> I think that arguing that a political stance is "correct" or "wrong" isn't the best frame of reasoning here, because it implies that there might be some sort of formal reasoning process you could use to reach a common conclusion. Whereas humans do not work like that and politics does not work like that.
I disagree with that. Insofar as a political stance refers primarily to objective reality (e.g. "should we introduce a carbon tax, and if yes, in what form?"), you should be able to reach a common conclusion in a group of honest people. That people often don't doesn't mean it can't be done - it's a result of political discussions involving plenty of dishonest or misinformed people.
> There's no absolute morality, and there's no universal set of moral axioms you can get everyone to agree on.
Yes and no. There isn't an explicit, absolute morality, but that doesn't mean morality is a free variable. It doesn't exist in a vacuum - it exists in human brains, which all run on the same hardware and mostly the same firmware. If you look across societies and cultures, plenty of values are essentially universal. That's not an accident.
> arguing from specific->general in one direction and general->specific
I agree that people generally argue like this, and it's somewhat orthogonal (if closer to the focus of daily experience) to what I wrote about. But humans are capable of doing these steps multiple times - e.g. simultaneously comparing the specifics of a situation against a specialization of a generic rule, and generalizing the situation to compare it against a set of general values. Insofar as we're talking about objective reality, correct generalizations and specializations will converge on a self-consistent picture. And whatever subjectivity is caused by a difference in values has its limits - it's not a license to throw reason away and declare the whole space of possible opinions equally valid.
> Each of those is a lossy transformation. (...) But neither is more "correct" in a formal sense than the other.
That's exactly what I'm arguing - two different lossy transformations can be equally "correct"; my projection example is a lossy transformation, after all. But this doesn't mean that any two objects from the codomain of a lossy transformation are equally correct! Just as a hexagon is not a valid projection of a cylinder, and a pony isn't a valid JPEG compression of a human portrait, some generalizations are wrong, and some specializations are wrong.
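Since we're throwing around codomains anyway: here's a toy sketch of the asymmetry I mean (the setup and names are entirely mine, for illustration only). Two different lossy projections of the same cylinder are both valid outputs even though they disagree with each other, yet a hexagon gets rejected as a possible top-down view:

```python
import math

def cylinder_rim_points(r=1.0, h=2.0, n=360):
    """Sample points on the two rim circles of a cylinder (radius r, height h)."""
    pts = []
    for i in range(n):
        theta = 2 * math.pi * i / n
        for z in (0.0, h):
            pts.append((r * math.cos(theta), r * math.sin(theta), z))
    return pts

def project_top(pts):
    """Lossy: drop z. Both rims collapse onto a single circle."""
    return [(x, y) for x, y, _ in pts]

def project_side(pts):
    """Lossy: drop y. The rims become the top/bottom edges of a rectangle."""
    return [(x, z) for x, _, z in pts]

def could_be_top_view(shape_pts, r=1.0):
    """A shape is a plausible top-down view of the rims only if every
    point lies on the circle of radius r."""
    return all(math.isclose(math.hypot(x, y), r) for x, y in shape_pts)

pts = cylinder_rim_points()
top = project_top(pts)
side = project_side(pts)

# Two different lossy transformations, both valid - neither is "more correct":
assert could_be_top_view(top)
assert all(z in (0.0, 2.0) for _, z in side)

# But not any shape goes: a regular hexagon's edge midpoints sit at distance
# r*cos(30 deg) ~= 0.866*r from the axis, so it fails the validity check.
verts = [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3)) for k in range(6)]
hexagon_rim = verts + [((x1 + x2) / 2, (y1 + y2) / 2)
                       for (x1, y1), (x2, y2) in zip(verts, verts[1:] + verts[:1])]
assert not could_be_top_view(hexagon_rim)
```

The point being: "lossy" constrains the codomain without collapsing it to a single right answer - multiple outputs pass the check, but most don't.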
> (e.g. "should we introduce a carbon tax, and if yes, in what form?"), you should be able to reach common conclusion in a group of honest people.
Really? Even assuming that everyone agrees on the model of policy responses - introducing tax X has result Y - people will still have strong opinions about the distributive effects of those and even the relative importance of the environmental effects. People don't usually come out and say "I'm not willing to spend a dollar to prevent Miami and Bangladesh being submerged", because that sounds bad, but they come very close to it.
> some generalizations are wrong, and some specializations are wrong.
On this we agree - some models are just delusional "motivated reasoning"; Sandy Hook "truthers" etc.
> Even assuming that everyone agrees on the model of policy responses - introducing tax X has result Y
This is a purely objective question, and can be discussed between honest parties until a common conclusion is reached.
> people will still have strong opinions about the distributive effects of those and even the relative importance of the environmental effects.
This is a mix of potentially subjective values and objective statements about how those values are affected by a proposed solution; I argue that even if a common conclusion cannot be reached, we can usually come pretty close to one.
> People don't usually come out and say "I'm not willing to spend a dollar to prevent Miami and Bangladesh being submerged", because that sounds bad, but they come very close to it.
People don't usually think in these categories; between lack of information, misinformation, opportunity costs (no one has time to stay up to date on everything, or even to think everything through) and psychological discounting ("I worry about securing food for my children tomorrow, not about some uncertain future 50 years from now"), you end up with people who believe things that are wrong even by their own subjective values.
I feel the problem we have isn't subjectiveness - most political issues are objective enough in principle. What we have is a computational problem - we can't get enough people to think and talk through the issues deeply enough. Instead, people fall back on computationally efficient heuristics - ideologies and soundbites. Pattern-matching everything into beliefs like "capitalism is bad" or "government regulation is bad" is faulty generalization, but it saves on thinking.
It may be that at the scale of our current problems, this computational barrier is in practice insurmountable. If so, we're fucked, and I'm not sure what to do. But it's probably why there's so much pushback against democracy from the "intellectual elites" - because they realize that this problem only scales with the number of people you need to involve to make a decision.
(N.b. trust is another computation-saving hack humanity has used since forever, and it's what enables us to have societies as large as today's. The trust that other people aren't trying to hurt me, that they have my interests in mind. It's damn effective, and that's why I'm particularly hostile towards individual, corporate and government activities that erode this trust.)