Why are there morals? How do we arrive at a sensible moral system? What is the basic line of reasoning?
The following evolved from an email discussion:
Agreement was reached on the principal mechanism for finding the best moral system:
1) There is a basic human need or desire for a moral system: Humans want an answer to the question "what should I do?"
- Philosophy assumes this, psychology doesn't.
2) An optimum answer to this question is preferred.
3) Empathy implies that the solution has to be a global one (independent of the individual).
- Not in moral philosophy. In moral philosophy it derives directly from the definition of morality. A moral system is a system of rules describing how each member of a society must act for the benefit of the society. Independence from individuals is implicit in the definition. If you introduce dependencies into the definition, you make it arbitrary, thereby tainting the moral system. So this is another of those requirements that flow from non-arbitrariness (below).
4) A unique solution is sought (to ensure that the result is independent of initial fluctuations).
5) This implies a structure of sub-goals called rights (instead of, e.g., an aggregation of individual preferences, which might be optimal but, like any algorithm based on random input, non-unique).
6) To maximize the sub-goals, abstraction over details of the individuals and the world is used until a maximum is found.
7) The maximum is where a) global total instant communication is present (distance=0) and b) aspects of the person on which an opportunity-based distinction could be made are abstracted away. In short: "brains in vats".
8) This implies a moral system that satisfies the categorical imperative.
9) The answer to the initial question is thus: Act such that the found rights are respected in order of priority.
In A Theory of Justice, Rawls actually goes through most of this reasoning, though of course not in that form. For instance, he explains and then dismisses Utilitarianism. He also explains and dismisses Intuitionism. He explains the categorical imperative, IIRC. In short, he goes through quite a bit of lengthy exposition before introducing the Original Position.
- People in Original Position are referred to in the following also as "brains in vats" (a further idealization).
Rawls introduces the population in the original position in step 6), but to me it looks as if it were not necessary to introduce this population as a sub-model.
ad 2) This is obvious, though there may be self-destructive individuals who might act otherwise. But I suppose these result from a divergent thought and socialization process that would "normally" not occur.
- Don't ever discount human divergence. And it doesn't matter since morality isn't supposed to be circular. You can't use people growing up in a moral society as a precondition for the definition of a society.
ad 3) There is an antagonist for empathy - the us-them tendency.
- Yes, tribalism. Tribalism is anti-moral and generally does not result from advanced childrearing.
ad 4) I take it that we want a unique solution so that everyone agrees not only in theory but also in practice (it doesn't help if the brains in vats agree, but we don't know which of N solutions they agree on).
- Yes. The solution should be detectable at least by highly moral people in practice. In practice this isn't a problem because we're always substituting highly moral (or at least philosophical) people for the brains in vats. We don't have access to any brains in vats.
- But to this I have a theoretical objection: If the possible solutions are sufficiently alike as to be equal in practice, i.e. a) the differences are so small as to be undetectable by real humans, or b) the differences average out over time/space/random noise from the environment, then uniqueness in practice is achieved even without a unique theoretical solution.
ad 5) How do you know that the sub-goals have the necessary granularity? How do you arrive at them? I assume there is some theory about it.
- The same way you learn anything, through an iterative process of modeling, inductive pattern-matching, and attempts to refute using counter-examples. Go through all your preferences, see which are in common or enough in common with everyone else. Go through everyone else. There is no systematic way to do it.
- So "do I like being mutilated?" No. And "does anyone else like being mutilated?" Generally no. And so on.
- But that seems to be an arbitrary process, which makes the result arbitrary too. And we agreed to avoid that.
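The common-preference search described in ad 5) can be caricatured in code. This is a toy sketch under loose assumptions: the people, their preference sets, and the refutation step are all hypothetical stand-ins for the iterative modeling/refutation process, not a claim that rights can literally be computed this way.

```python
# Toy illustration of the ad 5) procedure: candidate rights are the
# preferences shared by everyone, after which each candidate is
# subjected to attempted refutation by counter-example.
# All data here is hypothetical.

people = {
    "A": {"not_be_mutilated", "free_speech", "eat_meat"},
    "B": {"not_be_mutilated", "free_speech"},
    "C": {"not_be_mutilated", "own_property"},
}

# Inductive pattern-matching: keep what is common to all individuals.
candidates = set.intersection(*people.values())

# Attempted refutation: remove candidates defeated by counter-examples.
# In reality this set is filled by the iterative discussion process;
# here it stays empty as a placeholder.
refuted = set()
rights = candidates - refuted

print(sorted(rights))  # -> ['not_be_mutilated']
```

The sketch also makes the objection visible: the result depends on which preferences and counter-examples happen to be fed in, which is exactly the arbitrariness worry raised above.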
ad 6) I agree that 1-5 seem to imply that abstraction ("idealization") has to be used, namely to get rid of the random fluctuations. But I'm not yet clear on why we have to abstract further. I agree that this improves the solution, but couldn't there be other means? And is abstraction over properties of the world (communication) really allowed? After all, we cannot influence that by our acting.
ad 8) First, how does this follow precisely? To address this I have to take a detour: What general knowledge is available to the brains? All there is to have, I presume - because you imposed no constraints. How do you know what follows from this knowledge? How far can the brains look into the future?
- They can't look into the future. And knowing what follows from this knowledge involves a lot of guesswork.
- Couldn't it be that a) escape to space is impossible and a sustainable life on earth is required, b) this requires that sacrifices be made, and c) the best strategy (just to come up with an example) were to let people die of non-natural causes (ahem)?
- Yes, it could be. But it isn't.
- What you're describing is a lifeboat scenario where the lifeboat sinks killing everyone onboard if someone isn't thrown overboard. There is no classical solution to lifeboat scenarios because you can't inject your moral agency into a set of dice, and you can't come up with a non-arbitrary criterion for who will die. There is a quantum mechanical solution because quantum mechanics informs us that everyone will die, each in a different universe. QM allows for the roll of a die to be non-arbitrary.
- Yes, the lifeboat scenario is a good example (though an extreme one). I am thinking of the impossibility of meeting all needs - and how to cope with that.
- Couldn't this violate the categorical imperative?
- In the classical realm, there is no moral solution. There are a number of solutions, one of which will be employed, though none are moral. Morality isn't required to dictate everything.