how to think rationally
a philosopher humbly submits a new paradigm for how to be a rational agent
I first contacted philosopher Dr Ruth Chang because of her work on how to make decisions. I think it’s generally accepted that millennials suffer from decision paralysis and use yelp reviews (and their equivalents) to guide every choice, and this is absolutely true for me. One recurring decision I’ve struggled with for 3+ years is how to live. Without invoking too much of a motherhood-penalty evil eye, lol, I don’t understand how we are supposed to function as people AND parents AND workers[1]. When my child is sick (ie the duration of toddlerhood), there is no way to know whether the illness will get better or worse, and how long it will last — and yet I have a finite number of sick days. So when should I take a sick day? How can I know which days will be the worst? Next decision: when I invariably catch the kid’s virus, do I rest, or do I tough it out? How can I know that THIS day is miserable enough to justify resting, but tomorrow won’t be? Is it a marathon or a sprint[2]? When is time off necessary/restorative vs indulgent?
I tried a few blunt little tests to determine when to rest. My first idea, found empirically: I should probably rest if I find the idea of coffee repulsive. I assume the repulsion indicates a sinus infection. But this is impractical, too generous a prescription bc, at least for parents of young kids, one has a sinus infection (of fluctuating severity) for >2y solid. One can work with a sinus infection. This test is helpful, but not sufficient.
OK, another, sharper tool: in a Zoom meeting once, a colleague was lying in bed actively taking a covid test. One of our editors was like, hey, I think if you have to lie down, generally you shouldn’t be working. My ears perked up. Sounds like a rule!! There is still some subjectivity — what is “have to”? How do you know? — and it doesn’t capture a few other variables that seem necessary, like the health of your child, so it’s better, but still not sufficient for me.
I decided I needed a formula in which I could input all this information, a mathematical model that could describe the components of my overall condition and then give me a clear indication of the necessary action. I recruited John Taylor, of Taylor Rule fame, to help.
I thought I would assemble a team to help me tackle this. But the second person I tapped kind of threw a wrench in it.
That’s Ruth Chang, Chair and Professor of Jurisprudence at Oxford. When I reached out, she invited me for lunch in a leafy suburb outside Berlin, where she was doing a fellowship. We talked for three hours. She speaks with precision; she listens and responds thoughtfully. And she seemed to find my quest misguided and horrifying. Why was I so incapable of managing this basic human function? Are there others like you? she asked, with sincere curiosity. Yes, so many. She seemed to find my whole brain kind of broken. Why could I only think in costs and benefits? At one point she interrupted me mid-question, too aghast at my sociopathic logic chains to continue: “you really are corrupted by utilitarianism,” she said. From the transcript:
“It seems to me like you revel in complexity and subtleness and nuance and stuff like that. But you’ve been taught here’s this model through which you can see things. And that model is not correct. Like, it can be correct for lots of things; it is not correct for life. This is not my opinion — like, I — someone who spent 30 years studying this stuff, it’s not MY — it really is the case, okay? That you cannot, you cannot do it. You cannot write an equation for decision-making about the things that we care about in life….You cannot do it.”
….well!! I am still sitting with that. It sincerely hurts to consider that there are other ways of functioning, ways that might be better (better than optimizing?? confusing). Clearly I’m not there yet. Despite her informed objection, I still want to make my equation, a neat little formula that will tell me when to be kind to myself and when to be mean. If you are interested in helping me create this misguided model, please get in touch!!
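(If it helps prospective collaborators: here is the rough shape of what I imagine. A toy sketch, every input, weight, and threshold invented by me, i.e. exactly the kind of equation Ruth says cannot be written:)

```python
# a toy sketch of the misguided model. every input, weight, and threshold
# below is invented by me, which is of course Ruth's whole point
def should_rest(coffee_repulsive: bool,
                have_to_lie_down: bool,
                kid_home_sick: bool,
                sick_days_left: int,
                misery_today: float) -> bool:
    """Return True if today should be a rest day. misery_today is self-rated, 0-10."""
    if have_to_lie_down:      # the editor's rule: if you have to lie down, don't work
        return True
    score = misery_today
    if coffee_repulsive:      # probable sinus infection: helpful but not sufficient
        score += 2
    if kid_home_sick:         # two patients, one adult
        score += 3
    # be stingier with rest as the sick-day budget runs down
    threshold = 7 if sick_days_left > 3 else 9
    return score >= threshold

print(should_rest(True, False, True, 2, 3.0))  # False: tough it out, apparently
```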
While she couldn’t (or wouldn’t) solve my operational difficulties, Ruth did give me the gift of understanding her proposal for how to be a rational agent. With practice, her way may provide an escape hatch from my prison of optimization, because (as you’ll read) the choosing imbues the chosen thing with value. This gives everyone a sort of decision-making magic wand: whatever they choose becomes the thing they should have chosen! Dizzying, and thrilling.
My beloved Tagesspiegel published a condensed version of our conversation. The published interview is in German and also behind a (very good) paywall, but Tagesspiegel has generously permitted me to print the English version here.
Philosopher Ruth Chang: “We sit on the sofa and try to find out: Does the world tell me A or B?”
Dr. Chang, as a Harvard law graduate and philosopher at Oxford, you focus on the process of how we make decisions. Why is that so hard sometimes?
Because we are shaped by a way of thinking in which we humans are conceived as rational agents. Our job is to discover the right thing, and if we fail to discover the right thing, we will make a mistake. That builds in a natural fear of decision-making or results in a lot of agonizing – be that the choice to have a child or not, to live in the country or a city, to marry Bob or to marry Scott.
[ed note: i am married to a Scott. there was never a Bob!!! ]
So you’re saying that, in the dominant way of thinking, there is a “right” thing. What does that mean?
There, we're like scientists: we have to go discover all the pros and cons, and if we're really good at doing the mathematics, we can weigh them and we'll have an answer. We are sitting on the couch trying to figure out, does the world tell me to do A or B? When we instead approach things with the paradigm of parity, there's space for us to think about our relation to the world not as passive but as active. We get up off the couch and actually endow things with value and make them best for ourselves. Most of life is like that. When we actually decide, many of us do it that way, but we often still have the old concept in mind.
But many people seem pretty sure, at least in things like deciding to marry Bob or Scott.
Sure, in the paradigm that I criticize there are clear decisions: A is better than B, A is worse than B. Or A and B are equally good.
So where is the problem?
When you can’t compare A and B. In that case, I think that the old paradigm of rationality is off the table.
What’s the alternative?
In the new model I propose, there is an additional way to conceive of rationality. A and B can be qualitatively very different, but lie in the same neighborhood of value. I call that a case of parity. Most of the interesting hard cases we face in life are cases between alternatives that are on a par. Like, sometimes, the decision between Bob and Scott. [ed note: seriously, no bob!] When A and B are on a par, we have the capacity to put ourselves behind one option and endow it with value it didn't have before. You throw yourself behind something – and make that thing even better for you.
Can you give an example of such a decision between options that are “on a par”?
The people who will understand what I'm talking about most, I think, are parents. When the newborn pops out of the womb or from the adoption agency or whatever, there's a shift in the way the person sees the world going forward. They will find buying yet another car or getting the latest iPhone less pressing and attractive, because they are focusing on helping their kids grow and develop, spending their energies instead on organizing violin lessons and arranging playdates. You have decided and committed to this child, and now all the things that were important before don't seem to have as much value, or any value, or the same kind of value, whereas all these other things suddenly have this new kind of value. It changes the way you see things. We add value to the things we commit to – throughout our lives.
So we make our decision the right decision by committing to its consequences?
Yes. And in personal relationships, we already understand that. Rom coms are all about waiting for a protagonist to commit to another protagonist.
That seems exhausting, to always engage actively with the options life gives you, to always add something to them…
Luckily, we don’t care about all things with the same intensity. Think about haircuts. You and I, no offense, don’t seem to commit to our haircuts. [ed note: ….damn.] I just go in and say, make it easy to take care of. I just drift into a hairstyle. I can treat the options as if they were equally good. But a fashion model has to decide which haircut to go for, and he has to commit to one style as opposed to another, because that says something about who he is. Hard choices are not just about careers or places to live or whether to have families, but also about those small things that define us. They are the things that build us up, build our characters. That's who we are. You and I don't care how our hair looks [ed note: air-drying is not working out for me as well as i’d thought, helpful feedback!!], but we care about other things. And the good thing is: caring is something we can — and have to — practice. But if you have the prevalent psychological picture of your place in the world, then chances are you won’t even know you’ve got this capability.
This idea is the exact opposite of the economic theory of revealed preferences, which says that people’s preferences show up in what they buy, what they spend money on.
It's the opposite of standard economic approaches to decision-making, rational choice theory, and social choice theory. It's a view about the structure of normativity and our role as rational agents that would make a [formula] for life impossible [ed note: ... 👀]. There is a power at the heart of this paradigm shift. And it has heavy consequences for another big area of thought: on the way AI design is currently being done.
How is that?
All these engineers [at technology companies like Meta, Google, and OpenAI] quite naturally borrow from economics and psychology, that literature on decision making. So they build the machines that way. We are going to have machines that make decisions for us, and there will not be any room for value and commitment. For genuine hard choices. There's going to be value misalignment everywhere. It's a disaster waiting to happen. We have to build machines that leave room for us to actively decide.
Humans in the loop. What’s an example?
It's on the horizon already: hiring. Hiring is a drag. It takes a lot of time and emotional energy. Let's outsource it to AI. So an AI will take various criteria that we care about (productivity, loyalty, team spiritedness) and rank candidates with respect to each of those things.
If we accept this new paradigm, the machine will say things like, Adam is very loyal, a great team player. Everyone loves him. He will execute whatever the boss wants. But he works a bit slow. Biff is loyal in a different way: he'll tell the boss when he's got a cockamamie idea. Biff is not a great team player. You wouldn't want to have a beer with him. But he's super productive. In the old model, one of Adam and Biff has got to be better overall. The machine will rank one of them over the other or flip a coin. On the new model, the output could be that Adam and Biff are on a par, and then, getting that news from the machine, the hiring committee has to sit down and look at the dossiers of Adam and Biff and think: do we want the productive guy who's a bit of an asshole, or the not-so-productive guy whom everyone likes? Which value can we stand behind? When they decide, let's say, that we care about team spirit, they endow that value with extra oomph. And that affects hiring decisions going forward. That's the only way in which machines will ever align with human values. I think we can actually redesign AI in that direction.
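[ed note: for my fellow corrupted utilitarians, a little sketch of what a parity-aware ranker could look like. The names, scores, and “parity window” here are my inventions, not Ruth’s:]

```python
from enum import Enum

class Verdict(Enum):
    FIRST_BETTER = "first candidate is better"
    SECOND_BETTER = "second candidate is better"
    ON_A_PAR = "on a par: kick the decision back to the humans"

def compare(a: dict, b: dict, parity_window: float = 2.0) -> Verdict:
    """Instead of forcing a total ranking, declare hard cases 'on a par'."""
    # if one candidate is at least as good on every narrow criterion and
    # strictly better on some, the machine may safely rank them
    if all(a[k] >= b[k] for k in a) and any(a[k] > b[k] for k in a):
        return Verdict.FIRST_BETTER
    if all(b[k] >= a[k] for k in a) and any(b[k] > a[k] for k in a):
        return Verdict.SECOND_BETTER
    # otherwise the criteria pull in opposite directions; if the aggregates
    # land in the same neighborhood of value, refuse to force a ranking
    if abs(sum(a.values()) - sum(b.values())) <= parity_window:
        return Verdict.ON_A_PAR
    return Verdict.FIRST_BETTER if sum(a.values()) > sum(b.values()) else Verdict.SECOND_BETTER

# made-up per-criterion scores from narrow, separately trained rankings
adam = {"productivity": 4, "loyalty": 9, "team_spirit": 9}
biff = {"productivity": 10, "loyalty": 7, "team_spirit": 3}

print(compare(adam, biff))  # Verdict.ON_A_PAR: the committee has to sit down
```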
Can you also give examples of the dangers if developers don’t follow your model? A worst-case scenario where an algorithm force-ranks things that are on a par?
Yes, we perpetuate bias. A simple case: If we treat John as a better hire than Mary when they're in fact on a par, then the data point that John is better than Mary [becomes] a basis for more bad decisions: she ends up at a lower-paying job, lives in a depressed zip code, can't get a car loan, etc. Mary gets perpetually shafted, simply because the algorithm didn't properly recognize that the choice between John and Mary was hard, and instead forced a falsehood into reality — namely, that John is a better hire than Mary.
[ed note: the Mary is always the best hire!! always true, u can just program that in]
A worst-case scenario: Suppose you're Biden trying to decide whether to order a drone strike on a certain target. There's some evidence that one of the FBI's most wanted terrorists is living in that compound and some probability that if you order the drone strike, the terrorist will be killed. There is also some probability that a certain number of innocent lives will be lost in the strike. All of this data is generated by algorithms. Now how do you put the data together? How low can you appropriately go in the probability of killing the terrorist, and how high in the number of innocent lives lost? If we use an algorithm to decide the tradeoffs, we won't make room for hard cases. The algorithm will simply force a falsehood into reality, for example, that ordering the strike is better than not ordering the strike. Currently, at least in the US, the algorithms developed in war games all have "the human ON the loop," that is, no machine ever makes a final decision about whether to order a strike — a human always has veto power. But this isn't enough. We need our algorithms to reflect reality: when two factors in a choice pull in opposite directions, there will likely be hard cases. I suggest we build algorithms that recognize a wide swathe of hard cases and kick the decision back to the human.
AI-based software has often been criticized for perpetuating bias — i.e. not recognizing the faces of Black people as human, or favoring white-sounding names on resumes. How can your philosophy help us circumvent this bias in ourselves and in AI?
In the AI design I propose, we minimize unfairness by leaving to machine learning only rankings along very narrow desiderata, like profitability, how high someone scored on a test or on a human-conducted job performance review, etc. By narrowing the criteria judiciously, we avoid putting big, fluffy goals into machines like 'best hire overall,' which is exactly where bias mostly creeps in. There is then the problem of how to put those narrow, machine-generated rankings together. I think it is wise to have really broad conditions for when a choice is hard. That would leave a lot of choices up to us. Now the question is, how does this help? If lots of choices with machine input turn out to be hard, and so we humans need to make them, how does that help with bias, since we know we're biased?
I think that instead of microscopically analyzing the pros and cons of individual applicants, this process opens up space for a different kind of discussion: what can we as a firm commit to, or stand behind? Yes, we all have biases and prejudices, but those are less apparent in discussions about abstract values we can stand behind and commit to.
What has been the most difficult decision for you so far?
My hardest decision in life was the rather humdrum one of whether to be a philosopher or a lawyer. Lawyer: family pressure, financial security. But so boring, I hated it. Philosopher: incredibly difficult work, not entirely pleasant either, very male-dominated and women-unfriendly discipline, possible homelessness. But I loved it and could commit to it. So I did.
[ok one final editor’s note:
I CARE ABOUT MY HAIR! at least a little. In this case on an empty bus in downtown Charleston. thank u.]
If, like me, you need more in this vein: Ruth recommended Lorraine Daston’s 2022 book, Rules: A Short History of What We Live By[3]. I’m still reading but am loving it.
If you need MORE more, I have ~3,000-5,000 words in my heart about Ruth and how her new paradigm upends the life’s work of her mentor and resolves a huge longstanding paradox in moral philosophy!!!
[1] obviously part of the answer is to add people — to make enough money that you can hire other people to help with the parent part, and (to the extent possible) with the being-a-person part (cook, clean, etc). For the purposes of this thought exercise, suspend this option.
[2] Chang’s concept of parity is (dare I say it) on a par with the word embeddings underpinning LLMs. For example, in a vector space with axes “sex” and “royalty”, the difference between “king” and “queen” is roughly the same as the difference between “man” and “woman”, so you can do vector math like king - man + woman ≈ queen.
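(a tiny numpy sketch of the cartoon, with made-up 2-d vectors I invented for this footnote; real embeddings have hundreds of dimensions and the arithmetic only holds approximately:)

```python
import numpy as np

# made-up 2-d "embeddings" on the invented axes [sex, royalty];
# real embeddings have hundreds of dimensions and this only holds approximately
king  = np.array([1.0, 1.0])   # male, royal
queen = np.array([0.0, 1.0])   # female, royal
man   = np.array([1.0, 0.0])   # male, not royal
woman = np.array([0.0, 0.0])   # female, not royal

result = king - man + woman            # remove "maleness," keep "royalty"
print(np.allclose(result, queen))      # True: we land on "queen"
```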
[3] this is the second time in 3y that I have been compelled to recommend a Princeton University Press book, and that’s without getting into Justin Smith-Ruiu. The editors there have my number!!! keep up the great work!!