That mostly sounds fair enough, as far as it goes. If by "decision space" you're talking about decision theory, I don't know much about it. But the mapping in question is consistent with my point that a mere (likely scientistic) metaphor is at play. Fictions (idealized, oversimplified models) can be mapped onto nonfictions or onto the phenomena. They're not identical and we're liable to idolize the models, to mistake the map for the territory. I see that you're not inclined to make that mistake, which is fine.
I'm not sure, though, that the context here is just the source of morality. My article critiques libertarianism, and my point there wasn’t that moral values don't arise from the worth of individuals; rather, I was saying that libertarian principles and narratives are inadequate foundations and justifications for social Darwinian policies.
Your point about the need to maximize the probability of achieving our values through social coordination certainly seems abstract enough to capture what happens in moral reasoning. But is that capacity for instrumental rationality the foundation and justification of our moral judgments? Are some actions made right or wrong only because we can ponder our preferences in relation to those of other people?
No, instrumental rationality is our tool for deciding which values and goals to prefer. The source of morality is the anomalous creativity involved in having such ideals in the first place.
Coordination is required for compromises in a social setting, but that seems more relevant to political or to economic values than to moral ones. The reduction, then, would be of morality to political and to economic modes of calculation, the latter two being infamously amoral. This reduction leaves out the existential, religious, and artistic dimensions of morality, the leap of faith in the vision of a better world.
As Adam Curtis’s recent series of documentaries points out, that’s just the vision that’s left out of the bureaucracies that would resort to a scientistic, technocratic mapping of evaluations onto empirical explanations, to a mapping that would hide the madness and vanity of our prescriptions. That mapping is scientistic because there’s an abyss between the preeminent values of morality and the matters of fact that science is fit to explain.
But we want to pretend there’s a science of morality, and we call it “decision theory” or “game theory.” In tinkering with those theories, we’re liable to forget the magnitude of our hypocrisy. We presume we’re more rational and businesslike than we really are, more forward-thinking, responsible, and in charge than we could hope to be.
Yes, garbage-in, garbage-out. But I’m talking about the social effects of the presumptions behind the use of these tools. Sometimes the medium is the message, so it’s not all about bean-counting or the collecting of data. We adapt to our environments, and when we surround ourselves with scientistic instruments in our neoliberal bureaucracies that are dominated by economic, egoistic concerns, we degenerate, becoming infantilized consumers or mechanical drones, which is the opposite of acting morally.
This is inspiring me, though, to write about what seems like the Orwellian aspect of this hyperrational approach to morality.