And finally, population ethics and the repugnant conclusion. As I’ve said various times, I joined my friend’s reading group on ethics around a month ago, and we’ve covered various writings, from Plato’s Socratic dialogues to Isaiah Berlin. Now we had a discussion (for lack of a better subject) on Parfit’s “repugnant conclusion” in population ethics.
Oh, population ethics, how platonified can you get. If you thought using the bell curve to describe uncertainty in social matters was bad, then you are going to like this. We are descending so low that we are going to need to blast through the bottom of the barrel to get where we’re going. Reading about the repugnant conclusion in the Stanford Encyclopedia of Philosophy, I was thoroughly dissatisfied. If this is the state of ethics, our ethics is screwed. I couldn’t even bring myself to read the article in detail, it was so dull and so unrelated to ethics!
First you decide to reduce your discussion to this “utilitarianism” bullshit where everything is about maximizing utility (a way of measuring happiness), and then you talk about how a society with a few very happy people might have less “total happiness” than a society with some (arbitrarily) sad people and innumerable “somewhat” happy people, and claim that this is unintuitive. What’s really unintuitive is how you could imagine that happiness could just straight up be measured with good results. What doesn’t make sense is the assumption that if you ignored virtually all facets of experience and focused on “happiness” alone, you would get some sort of coherent and useful theory.
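For what it’s worth, the arithmetic this comparison rests on is trivially small. Here is a minimal sketch; the population sizes and per-person utility levels are numbers I made up purely to show the shape of the argument, in the spirit of Parfit’s societies A and Z:

```python
# Two hypothetical societies; all numbers are invented for illustration.
society_a = {"population": 10_000_000, "utility_per_person": 100}
society_z = {"population": 100_000_000_000, "utility_per_person": 1}

def total_utility(society):
    # Total utilitarianism: sum happiness over everyone.
    return society["population"] * society["utility_per_person"]

# Z "wins" on total utility even though every life in it is only
# barely worth living -- that is the repugnant conclusion.
print(total_utility(society_a))  # 1000000000
print(total_utility(society_z))  # 100000000000
```

The whole paradox lives inside that one multiplication, which is exactly why the choice of what the utility numbers mean carries all the weight.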
But let’s assume that actually, yes, there is a single quantity that you can measure and get good results with. Maybe it’s OK then to talk about properties of utility functions? No! If we don’t know the actual mapping (or something very close to it), then it’s absurd to think you could somehow reason about its conclusions with respect to your intuitions! The repugnant conclusion is both intuitive and unintuitive if you just change the mapping (i.e. Torbjörn anchors 0 as not a bad life, while we might anchor 0 as a pretty terrible life, in which case the repugnant conclusion is even more repugnant). What feels linear to us may not be linear in our mapping. We can just change the mapping, but then it probably won’t be linear to someone else. These are all things which rely on interpretation, so why use mathematical methods? To use mathematical methods you have to leave the axioms up to interpretation, which means that the whole game is baloney and you can just reinvent the rules to suit your conclusions. Want the repugnant conclusion to happen? Let it do so. Want to avoid it? Just change the rules (it’s actually “not repugnant” because everyone is “pretty well off” and we have never yet experienced a society with such immense happiness as A in the article). For example, they define terms like what a “life worth living” is by saying that anything above zero is “worth living” and anything below it is not.
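To make the anchoring point concrete, here is a small sketch (my own toy numbers, not anything from the literature) showing how merely shifting where the scale puts zero flips the verdict on the very same lives:

```python
# The lives in Z all score 1 ("barely positive") on a scale that
# anchors 0 at a neutral, not-bad life. The shift value is invented.
lives_in_z = [1.0] * 1_000

def reanchor(utilities, shift):
    # Moving the zero point is a one-parameter change of the mapping.
    return [u - shift for u in utilities]

# Anchor 0 at "not a bad life": every life in Z counts as worth living.
worth_living_original = all(u > 0 for u in lives_in_z)

# Anchor 0 so that "a pretty terrible life" scores 0 (the old zero now
# sits 2 points above the new one): the same lives fall below the line.
worth_living_reanchored = all(u > 0 for u in reanchor(lives_in_z, 2.0))
```

Nothing about the people changed between the two verdicts; only the convention did, which is the whole complaint about leaving the axioms up to interpretation.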
Now, this is not to say that no moral or ethical theory is useful, or arguably even necessary (less likely, but I’ll buy that too). It should just not be so absurdly detached from reality while claiming to be the “most scientific” because “math.” Thought experiments can count sometimes. However, if you are using only thought experiments, you are probably not basing anything on reality! I mentioned this during our group and got shut down real quick (though this is also my fault for not “playing the game” nor being the most coherent). Now, there are some good counter-points. Namely, falsifiability is kind of a fishy term regarding morals, since morals are what we use to define how we should behave; that allows for a falsifiable theory of how to act in accordance with the morals, but not one for the morals themselves, since there is nothing to test against. That said, I disagree that it’s a hopeless endeavor. The fact that people tend to agree on basic moral precepts, like not killing random people on the street, not stealing from the homeless, and not being an asshole, suggests that there is something to test against: our intuitions. Now it looks like I’m backing myself into a corner where suddenly the whole utilitarianism thing looks good again, because “hey guys, we’ve actually been doing (thought) experiments this whole time, against our intuitions.” Primo, no. Secondo, absolutely not. Terzo, our intuitions stem from our experiences, not the other way around. If you just do thought experiments devoid of actual real experiences, you are giving your intuitions nothing to latch on to. We know how to function in contexts, not in a vacuum. When you do a thought experiment, you get to pick the context.
This is dangerous because not only is it sometimes not obvious that you are making contextual assumptions (or what those entail), but it also means that you’ll just pick whatever context lets you deny or accept the theory’s conclusion based on whatever subjective first impression you had. When I see a moral theory with numbers in it, for example, I tend to default to whatever context will let me show (at least to myself) that it’s a theory for imbeciles. Also, as I mentioned, it’s subjective, and no one can really pinpoint what contexts the other people are thinking about.
What I envision ethical philosophers doing is quite different from all this thought-experiment, “let’s explore what’s in the box before we venture out of it” nonsense. You should go out and live life and try to get unique experiences that cover a lot of mental territory. These will help you form intuitions. Then a group of you worldly philosophers should get together and perform “experiments” that roughly work like this: you go and do tasks and make decisions that have some moral import (as a lot do), and then you observe, while you make decisions, what transpired, how your intuitions compared to what you did, what could have been better, etcetera. Are you happier because of your decisions? Is society better? These are up to interpretation, but there is still value in actually trying things out, because now your interpretations about what you did in a shitty situation, for example, are not just made-up categorial, conceptual platonisms formed while you sat in comfort, not understanding what any of it actually meant. It’s like having someone who’s never gone to war, never fought in a war, never even seen a war, but who has read plenty of books, be chosen to lead your soldiers at the front lines. It doesn’t make sense to separate thinking so much from what you are thinking about (since you tend to forget). In a nutshell: see the world through a moral lens as you experience it so you can form reasonable ideas. You could also try to make some sort of lab, but since this is “up to interpretation” and social, that’s gonna be harder.
Speaking of “reasonable” theories, one obvious justification for speaking about ethics in these highly abstract and hypothetical situations appears to be, as my friend told me, a quest to form reasonable hypotheses so that we may be able to test them later or whatever. There is a big problem with this. A few, actually.
Firstly, what is reasonable depends on context. If we are talking about numbers, I will not be thinking about my experiences all too much; instead I will be struggling both to figure out what the numbers mean and to find ingenious ways to combine them so that we can get conclusions that I’ll cherry-pick unwittingly, ruining the whole endeavour. I want to be thinking in the context of the situation I’m in. If the situation is so unreal (societies A and Z) that I can’t really get the context non-conceptually, it’s hard to make something of value.
Secondly, the formation of “reasonable” theories is not necessarily a good thing. People will be using these on a massive scale before they are “tested,” or using them as justification, making them fodder for extremists who want some “logical” explanation for their hatred so that they appear more justified or whatever. Moreover, too much information is toxic. You just begin to get lost in the noise and go on completely useless tangents when there are too many options. The more theories you make up that no one is going to use for a long time, the more papers I need to read just to get what you are all even talking about, whether two theories are actually the same theory worded differently, and if not, where they differ, and so on. Whole lives will be spent trying to build on something that’s no good, because we just can’t explore all the options, and that will just suck. It will slow down the development of useful ethics and make life harder.
Thirdly, not only is the overwhelming amount of information toxic, but the lack of time that remains is harmful too. People will spend absurd amounts of time exploring things which help no one, are used by no one, and will never be seen by anyone other than those in their (probably academic) intellectual circlejerk. I don’t want to get into the morality of whether you are doing bad by not doing good, since it depends, but in this case it seems strongly superior to just focus on simple theories that are grounded in reality and that normal people can understand, use, etcetera. There is power in actually executing the obvious. Most obvious things are not done. You can get far by just actually doing obvious things. Why do you need to go and build a big tower on a swamp? Just add one room to a nice house on a hill, and eventually someone will complete the house. Be more patient.
Lastly, reality is often not reasonable. I don’t care if your theory is reasonable; I care if it works, if it makes my life better, if it makes society better, and so on and so forth. How do you claim to know that what is reasonable is right? No one in our discussion has been through tough ethical decisions, and yet we talk about what some central planner should do in various extreme societal situations. If we have an extreme situation, how are you to know that the best decision should not also be quite “unreasonable”? While I’ve outlined before that much of our experimentation should compare with our intuitions, keep in mind that not only are our intuitions living, but they are also only relevant in fields where we have intuitive experience. I have good intuitions about whether I should be an asshole during a board game (don’t), but I don’t have such good intuitions when I need to decide whom to kill, if anyone at all, from a position of power. Urging that we should make reasonable theories is like urging that quantum mechanics should work as we’d expect. You know what? They found out it doesn’t. Reality does what it does, and that’s just bad luck, Brian, for your clockwork universe (“oh, but it was so intuitive and reasonable”). Maybe the physics analogy isn’t doing it for you, so consider the Europeans when they went to Africa in the 1800s. They saw that the natives had a bunch of towns on hills and thought, “What dummies, towns should be next to rivers, because you can transport people with boats and have easy water nearby.” Now most African cities are next to rivers, and ever since then they’ve had loads of malaria. Why? Because mosquitoes love to be next to the water, and not up in the hills. Turns out the natives knew something, who would have guessed. Sometimes your “reasonable” theory is totally unreasonable if you know a little more information.
When trying to come up with “reasonable” hypotheses, you need to understand that you can only talk about the world you know very well; beyond that, it will fray tremendously. Tremendously. Thought experiments on edge conditions won’t work, because you don’t have a gut sense for how those edge conditions work, so your theory meant to cover all these bases actually just covers one. Instead of spending time thinking thoughts, take a little while to observe a lot first, then think about that.
Overall, forming so-called reasonable theories based on reductionist, platonicist axioms detached from real-world experience and not meant for real-world people is not a praiseworthy pastime. Ethics is important, but it should be something that people can understand, that you can “test” in some broad sense of the word that entails at least a shred of falsifiability, and that is somehow born of the world and/or not completely orthogonal to it. And if you’re doing this reductionist thing and the argument is that “actually, this is not detached from the world, we are ethicists, we know that we are talking about the world,” just know that they don’t know that they don’t know, and that’s the most dangerous form of unknowledge. To be talking about the world you need to feel it in your gut. This is what I mean by “getting it”: understanding in a way which is non-conceptual as opposed to conceptual (categorial, logical, formal-system-like, something you can reason about, etcetera). If you don’t feel it in your gut and can’t map the things you are talking about to specific experiences, feelings, ideas, ways of thinking, and so forth, you will forget that you are talking about the world and will just explore inside the box of your formal-system-like theory, not noticing that you are drifting ever farther from reality, because “actually, we’ll get back to it later.” It’s like the story of the frog in a pot of water on the stove. The stove is turned on, and the frog initially does nothing because the water is still cool. It warms up a bit and the frog notices, but thinks, “It’s only a little warm, no harm in staying.” After that it begins to get uncomfortable, but the frog never leaves, because “it’s not dangerous yet.” Eventually it’s boiled. The analogue is that we walk around deluded into thinking we are discussing “ethics” when really no one knows what we are talking about.
We start out talking about reality but drift ever farther from it, because of the urge to reduce problems to make them more “tractable,” as well as the desire to be “general” and “abstract,” to speak about “all of reality and not just anecdotes,” which really just means we end up speaking about none of reality. This is how platonicity happens.
I had more to say, but I’ve forgotten it because I was so annoyed diving into this subject. Anyways, if you do want to talk about happiness logically or formally, here are some ideas. Begin with multiple dimensions: happiness is not just “happiness” but a collection of pleasure, pride, well-being, enjoyment, bliss, and other positive feelings. Draw separations where you can and form (temporary) categories. Don’t treat happiness as real numbers; perhaps have a one-to-ten scale and increase granularity only when it is needed. Make maps from experience to numbers early on, so you can explore specific ones. Consider various ways of combining the various dimensions to measure the “value” of a civilization. Etcetera.
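Those suggestions could be sketched roughly as follows. Every dimension name, scale, and aggregation rule here is my own invention, just to show the shape of the idea:

```python
from statistics import mean

def person(pleasure, pride, well_being, enjoyment):
    # Coarse 1-10 integers rather than real numbers; granularity
    # can be increased later if it's ever actually needed.
    scores = {"pleasure": pleasure, "pride": pride,
              "well_being": well_being, "enjoyment": enjoyment}
    assert all(1 <= v <= 10 for v in scores.values())
    return scores

def value_by_mean(population):
    # One candidate way of combining dimensions: average everything.
    return mean(mean(p.values()) for p in population)

def value_by_floor(population):
    # Another candidate: judge a civilization by its single worst score.
    return min(min(p.values()) for p in population)

pop = [person(7, 5, 6, 8), person(3, 4, 5, 2)]
# The two aggregations can disagree about which society is "better,"
# which is exactly the kind of choice worth making explicit early.
```

The point is not that either aggregation is right; it’s that writing the mapping down early forces the interpretive choices into the open instead of hiding them behind a single “utility” number.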
Anyways, real talk, I guess this counts as a sort of diatribe, but this utility stuff is completely sus, and it's seriously annoying to be talking about "ethics" or things that "matter" and then play "who can stay in the box the best" within this hella fishy conception. It really, honestly, feels like gaslighting. Maybe I'm just impatient, but that's how they get you: the realism will come "later," we just need "rigor" first, or something "self-consistent" (c'mon, anything can be "self-consistent" if you add enough variables; why not focus on reality first and consistency later?). I feel like a good theory should be good (or not bad) in increments: you take little chunks of it, and even if you don't have everything, it's immediately useful from the very get-go. It should be relatively easy to explain and on the inflexible side, so that interpretation doesn't totally change everything (for example, one way to do this is to have recommended actions be inflexible and reasons be flexible, since the interpretation is then focused on an area where it doesn't strongly affect the outcomes of your theory; there are other ways, but you should limit the interpretability to places where it won't make the theory so broad that it might as well be meaningless, like this utility stuff, which is either super broad or completely detached from reality, or, as is usual, both). Lastly, we should require these theories to stay close to reality, just as Antaeus's strength required him to stay close to the ground.
 And probably most if not all of utilitarianism. (return)
 Forgive me for using meaningless terms, but I’ve had to base all this on some meaningless assumptions. (return)
 Basically, that you should have a falsifiable theory, not some weird “axioms up to interpretation” bullshit where you hide your always-moving assumptions behind so many layers of “logic” that everyone and your mother will stop listening except for the idiot savants. (return)
 Inevitably, because of confirmation bias or whatever bias I am harboring at the moment I’ll have them mean what I want them to mean, not what they actually should mean. (return)
 Same problem as above. (return)
 That’s actually why the approach of thought experiments where you yourself make a decision appears more promising than just “area under the happiness curve” bullshit for understanding what’s going on, though, once again: reality? Is anybody home? (return)
 Since most “reasonable” theories are not actually useful if they aren’t born of reality, and most of the ones we are coming up with aren’t. (return)
 Hahaha, kids these days. Central planning. Yikes. Why do we pose these moral questions assuming we are an entity that is basically all-powerful, and never as an entity more realistic in its scope? Does it help elucidate our values? Probably not. (return)
 As they always are nowadays. Everyone would prefer to be precisely wrong than vaguely right. (return)
 Antaeus was a son of Poseidon (I think) who was immensely strong at wrestling as long as he was touching the ground. If he ever got off the ground, he would lose all his strength. In a wrestling match with Hercules, Hercules realized Antaeus' weakness and lifted him, strangling him in midair as he was powerless. (return)