You keep assuming that morality is directed towards achieving positive-sum outcomes (win-win games with no losers).
I see morality as having to do with the burden of free will, which allows us to be good or bad. Morality is about managing good and bad people differently, and about encouraging everyone to be good. Creating a society in which no one loses would likely involve the loss of freedom.
As they stand now, morality and freedom--that is, the prescriptions needed to manage our capacity to act outside natural regularities, to choose between possibilities that biology alone doesn't envisage (which is why we need psychology, sociology, and the like)--may even be obstacles to the economic growth you're assuming is our ultimate goal.
Moralizing can make us narrow-minded and retributive. The Islamist terrorists were highly moralistic, as are anti-abortion Christian fundamentalists. People can be obsessed with punishing evil-doers, which hardly seems conducive to a win-win mentality.
You say game theory predicts that societies will regulate themselves to prevent a reversion to the natural state (a zero-sum game). But morality and honour codes may lead people to prefer the freedom of that natural state, of being allowed to win or to lose.
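For what it's worth, the distinction I'm drawing can be put in standard game-theoretic terms. Here's a minimal sketch of a zero-sum "natural state" beside a positive-sum game; the payoff numbers are purely illustrative, my own invention rather than anything you've proposed:

```python
# Illustrative two-player payoff matrices (hypothetical numbers).
# Each entry maps (row move, column move) -> (row payoff, column payoff).

# Zero-sum "natural state": one side's gain is exactly the other's loss.
zero_sum = {
    ("fight", "fight"): (1, -1),
    ("fight", "yield"): (2, -2),
    ("yield", "fight"): (-2, 2),
    ("yield", "yield"): (0, 0),
}

# Positive-sum game under cooperative rules: both players can gain.
positive_sum = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"): (0, 4),
    ("defect", "cooperate"): (4, 0),
    ("defect", "defect"): (1, 1),
}

def total_welfare(game):
    """Sum of both players' payoffs for each outcome."""
    return {moves: a + b for moves, (a, b) in game.items()}

print(total_welfare(zero_sum))      # every outcome sums to 0
print(total_welfare(positive_sum))  # mutual cooperation sums to 6, the maximum
```

The point of the sketch is only that "zero-sum" and "positive-sum" are properties of the payoff structure, not of the players' morals: whether people prefer the freedom of the first matrix over the regulated gains of the second is exactly the question game theory can't settle on its own.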
The US is freer and more moralistic (less pragmatic and realistic) than collectivistic societies, and it allows for much greater wins and losses. Millionaires and billionaires thrive there, and for every massive winner in the US there are tens of thousands of losers.
I'm not saying that's the ideal society. My point is that Americans are proud of that tradeoff. They constantly moralize, proclaiming theirs is the best country in history. So there's an example of morality standing in the way of what you're construing as the moral utopia. Americans are opposed to win-win scenarios, assuming they'd require central planning and a loss of liberties.
I agree, though, with much of what you're saying. Some of it seems tautological. Yes, a society that succeeds will have regulated itself with rules that furnish that outcome. Game theory may predict as much without predicting the cultural content.
The question is whether that kind of regulation is the same as morality or whether morality is included in that dynamic. Take the first of your three questions: Will there be rules? That sounds innocuous, but the idea of a rule is revolutionary, since it presupposes the freedom to imagine unnatural, ideal possibilities. A rule says what's right, what ought to be done, regardless of whether that outcome is normal, regular, or probable.
Morality isn't reducible to natural regularities. I suspect your game theoretical explanation of morality may be forced to change the subject from morality to something else (such as social planning), because you're trying to naturalize morality.
My last comment's last two paragraphs are relevant here, because they're about the source of values in quasi-scientific explanations. Precisely because prescriptions aren't reducible to descriptions, those who seek to explain morality scientifically or to manage behaviour in line with morality without themselves engaging in moralistic philosophy can only presuppose the values. Science doesn't supply them.
Thus, psychiatrists define mental illness in a way that leverages social norms. If the society tends to disapprove of homosexuality, then psychiatrists assume that homosexuality is "bad," as in dysfunctional. Indeed, gay people would have a hard time fitting into a society that rejects them. But because psychiatrists can't speak in moralistic terms and still think of themselves as scientists (because of the naturalistic fallacy), they have to tap dance around these judgments and presuppose the merit of the social biases.
So I'm wondering whether game theory is in the same boat. The "morality" in question is rather the set of laws and customs that regulates behaviour to avoid the worst outcomes. That's pseudo-morality because it's subject to naturalistic prediction.
Again, real morality is about addressing the freedom that makes us anomalous creatures, that enables us to create and to violate rules, because we're relatively godlike, because we choose to build anti-natural, artificial worlds that indeed are meant to improve on the wilderness. But you're using evolutionary theory, which is fit to explain animal behaviour, to explain one of the anomalies that make us people rather than animals. Why should that be expected to work?
Whether we're dealing with virtue theory, consequentialism, or deontology, the moralist is freely directed by an ideal, by the belief that some things are good and others are bad. What's good or bad isn't the same as what's common or rare, for example. The naturalistic fallacy complicates any such reductive explanation.
Even speaking of morality as a "game" seems presumptuous. I'd prefer to emphasize the existential stakes rather than trivializing them.