Plutoware delimited : Glossary of Cooperative Games

Glossary of terms

The following terms relate to cooperative game theory and cooperative economics. Definitions are drawn from A Cooperative Species: Human Reciprocity and Its Evolution by Samuel Bowles & Herbert Gintis, 2011 (pdf).

This document is for educational purposes only.

Noncooperative games

Cooperation is possible in noncooperative games, but it is achieved only through players playing rational strategies based on self-interest. This type of cooperation is perhaps more accurately called "coordination." In noncooperative games it is not possible to coordinate through agreements or enforcement.

Cooperative games

In cooperative games it is possible to make binding agreements. Agreements can be tacit or explicit. What makes the agreements binding is some mechanism (possibly outside the game) that enforces them and gives them consequences, such as the institutions and norms described below.

Cooperative games can use noncooperative mechanisms (coordination based purely on self-interest), and in addition can use agreement-based mechanisms that are not allowed in noncooperative games. Rules may be encoded in cultural institutions, or agreed upon via social norms that are understood but not written down anywhere. Conduct may be enforced by a third party (such as a police force) or by members of the group (individuals acting on their own conscience, or members who take it upon themselves to enforce group norms).

Evolutionary games

The key idea in evolutionary game theory is differential replication over many rounds rather than best response over one or more rounds.

Successful strategies in an evolutionary game are those that produce more than the average number of replicas in the next period, whether the strategies spread through biological reproduction or because they are copied by others.

Coordinating device

Also known as a coordinating signal, this is a mechanism that coordinates the behavior of cooperative agents. The correlating device described below is one example.

A correlating device is something that sends out signals, private or public, to the players of a game, indicating which strategy each should play. The idea is that each player chooses their action according to their observation of the value of the same public signal.

In game theory a correlated equilibrium is a situation in which there is a correlating device such that, if all players follow the advice of the correlating device, no player can do better by switching to an alternative strategy.

A correlated equilibrium is more general than the well-known Nash equilibrium: every Nash equilibrium is a correlated equilibrium, but not vice versa. Correlated equilibria are also computationally more practical to work with; they can be found by linear programming, whereas computing Nash equilibria is believed to be computationally hard in general. For this reason, some work in algorithmic game theory argues that correlated equilibrium is the more natural solution concept for game theory in general.
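
As a concrete illustration (not from the book; the payoff numbers are made up), the following Python sketch checks that a "traffic light" correlating device is self-enforcing in the game of Chicken: whenever a player follows the light's advice, deviating cannot raise their expected payoff.

```python
# Minimal sketch (not from the book): verifying a correlated equilibrium
# in the classic "traffic light" version of the game of Chicken.
# Payoff numbers are illustrative assumptions.

ACTIONS = ["stop", "go"]
PAYOFF = {  # (row_action, col_action) -> (row_payoff, col_payoff)
    ("stop", "stop"): (6, 6),
    ("stop", "go"):   (2, 7),
    ("go",   "stop"): (7, 2),
    ("go",   "go"):   (0, 0),
}

# Correlating device: a traffic light that recommends ("stop","go") or
# ("go","stop"), each with probability 1/2.
DEVICE = {("stop", "go"): 0.5, ("go", "stop"): 0.5}

def is_correlated_equilibrium(device):
    """Check that no player gains by deviating from the device's advice."""
    for player in (0, 1):
        for advice in ACTIONS:
            # Conditional distribution over the other player's advice,
            # given this player's advice.
            joint = {prof: p for prof, p in device.items() if prof[player] == advice}
            total = sum(joint.values())
            if total == 0:
                continue  # this advice is never given
            for deviation in ACTIONS:
                follow, deviate = 0.0, 0.0
                for prof, p in joint.items():
                    other = prof[1 - player]
                    pair_follow = (advice, other) if player == 0 else (other, advice)
                    pair_dev = (deviation, other) if player == 0 else (other, deviation)
                    follow += (p / total) * PAYOFF[pair_follow][player]
                    deviate += (p / total) * PAYOFF[pair_dev][player]
                if deviate > follow + 1e-9:
                    return False
    return True

print(is_correlated_equilibrium(DEVICE))  # True: the traffic light is self-enforcing
```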

Self-interest axiom

The self-interest axiom says that people seek to maximize their expected payoffs and believe that others do the same.

Sometimes this axiom is defended as being self-evident, with the fallback assertion being that natural selection could not have produced any other kind of preferences.

“The first principle of economics is that every agent is actuated only by self-interest.” -- F. Y. Edgeworth, a founder of neoclassical economics (1881)

Cooperative economics questions this assertion, claiming that while self-interest is a good explanation of some human behavior, it cannot explain all human behavior.

Alice, Bob, and Carole

Alice, Bob, and Carole are fictional characters commonly used as placeholder names in economics and game theory.

Free riders

A person free rides if he benefits from the contributions of other group members while himself contributing less or nothing at all. A shirker is an otherwise trustworthy group member who temporarily avoids performing their civic duty. Shirking is a milder form of free riding.

Reciprocity

Strong reciprocity

Strong reciprocity is when people sacrifice their own payoffs in order to cooperate with others, to reward the cooperation of others, and to punish free-riding, even when they cannot expect to gain from acting this way. The term “strong” is intended to distinguish this set of preferences from entirely amoral and self-regarding reciprocation that would not be undertaken in the absence of some payback.

Indirect reciprocity

Indirect reciprocity occurs when Carole is likely to punish Alice when Alice has been unfair to Bob, and is likely to reward Alice when Alice has been nice to Bob.

Strategic reputation-building

Strategic reputation-building occurs when Carole behaves cooperatively only when her actions are seen by others, and hence can help build a reputation for social behavior.

Altruism

Because the strong reciprocator would increase his game payoffs by not cooperating, the motives for behaving this way are considered to be "altruistic". This does not necessarily mean "kind" -- a group member that is punishing a second individual for violating group norms might use violent means to do so -- the altruism here is for the benefit of the group, not the individual being punished.

Preferences, Beliefs, and Constraints

The beliefs, preferences, and constraints model is a key tool in economics and decision theory. According to this approach, what individuals do when restricted to a specific set of feasible actions depends on their desires and goals on the one hand, and their beliefs on the other.

Constraints represent the limitations placed on the feasible actions an individual may take in a given situation.

Beliefs are an individual’s representation of the causal structure of the world, including the relationship between the individual’s actions and the probabilities of the various possible resulting outcomes.

Preferences are the pro or con sentiments that make up the individual’s valuation of the various possible outcomes of taking an action. Preferences may be described as a preference function giving an ordering of the states of the world that may result from one’s actions.

Preferences satisfy two conditions:

  1. they are complete (any two states can be compared) and
  2. transitive; that is, consistent, so that if one prefers A to B and B to C, one then prefers A to C.

Preferences are the results of a variety of influences: tastes (food likes and dislikes, for example), habits, emotions (such as shame or anger) and other visceral reactions (such as fear), the manner in which individuals construe situations (or more narrowly, the way they frame decisions), commitments (like promises), internalized norms of ethical behavior, psychological propensities (for aggression, extroversion and the like), and affective relationships with others.

An individual’s behavior can be succinctly and analytically summarized as maximization of a preference function.

To say that individuals act on their preferences means that knowledge of these preferences provides a concise and accurate account of their actions, given their beliefs and constraints.
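
A minimal sketch of how the three pieces fit together, with made-up numbers: constraints define the feasible actions, beliefs map each action to outcome probabilities, and preferences value the outcomes; behavior is then the feasible action that maximizes expected preference value.

```python
# Minimal sketch (illustrative numbers, not from the book) of the
# beliefs-preferences-constraints template: pick the feasible action
# that maximizes expected preference value.

# Constraints: the set of feasible actions in this situation.
feasible_actions = ["work", "shirk"]

# Beliefs: for each action, a probability distribution over outcomes.
beliefs = {
    "work":  {"project succeeds": 0.8, "project fails": 0.2},
    "shirk": {"project succeeds": 0.3, "project fails": 0.7},
}

# Preferences: a valuation (utility) of each possible outcome,
# net of the effort the action costs.
outcome_value = {"project succeeds": 10.0, "project fails": 0.0}
effort_cost = {"work": 2.0, "shirk": 0.0}

def expected_preference(action):
    """Expected valuation of an action, given beliefs and preferences."""
    return sum(p * outcome_value[o] for o, p in beliefs[action].items()) - effort_cost[action]

best = max(feasible_actions, key=expected_preference)
print(best, expected_preference(best))  # work 6.0
```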

A version of the beliefs, preferences, and constraints model, incorporating the behavioral assumptions sometimes summarized as Homo economicus, has become standard not only in economics but throughout the human behavioral sciences.

Self-interest need not be part of the preferences, beliefs, and constraints approach. Preferences could be altruistic or even masochistic.

Social Preferences and Social Dilemmas

Social preferences are defined as the concern for the well-being of others and a desire to uphold ethical norms.

Self-regarding preferences

Self-regarding preferences are based on states concerning oneself alone. They can be hard to detect, but they can surface in subtle ways, for example when someone experiences anxiety in a culturally unfamiliar interaction.

Other-regarding preferences are based at least in part on states that occur to others, valuing the well-being of other people in the group. An other-regarding player cares about not only his own payoff, but that of other people as well.

Ethics vs Morals

Ethics and morals both relate to “right” and “wrong” conduct. Ethics refer to rules provided by an external source, such as workplace rules or religious principles. Morals refer to an individual's own principles regarding right and wrong.

Ethical commitments may reflect a concern for the states experienced by others, but need not.

Self-esteem is dependent in part upon what others think of us. We attempt to favorably impress others as a means of raising our subjective self-esteem.

Having social preferences means being “unselfish” and “non-self-interested”.

Examples of games

dictator game: Alice gives a certain amount of money to Bob, who has no say in the matter.

third-party punishment game

Games typically have multiple variants that can tease out nuances in behavior. One variant of the dictator game proceeds as follows (a payoff sketch follows the list):

  1. Carole, the “third party,” has an endowment of 50 tokens and observes Alice’s transfer.
  2. After this Carole can assign punishment points to Alice.
  3. Each punishment point assigned to Alice costs Carole one token and Alice incurs a penalty of three tokens.
  4. Because punishment is costly, a self-regarding Carole will never punish.
  5. However, if there is a sharing norm, Carole may well punish Alice if she gives too little.
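
A minimal payoff sketch of this variant. Carole's 50-token endowment and the one-token-costs-Alice-three punishment technology follow the description above; Alice's 100-token endowment and Carole's punishment rule are illustrative assumptions.

```python
# Minimal payoff sketch of the third-party punishment variant described above.
# Alice's endowment (100 tokens) and Carole's norm (one punishment point per
# 10 tokens short of an equal split) are illustrative assumptions; Carole's
# 50-token endowment and the 1-token-costs-3 penalty follow the text.

def third_party_punishment(alice_transfer, punishment_points,
                           alice_endowment=100, carole_endowment=50):
    alice = alice_endowment - alice_transfer - 3 * punishment_points
    bob = alice_transfer
    carole = carole_endowment - punishment_points
    return alice, bob, carole

# A self-regarding Carole never punishes (punishing only costs her tokens):
print(third_party_punishment(alice_transfer=10, punishment_points=0))   # (90, 10, 50)

# A Carole with a sharing norm may punish a stingy Alice:
shortfall = 50 - 10                  # tokens short of an equal split
points = shortfall // 10             # hypothetical punishment rule
print(third_party_punishment(alice_transfer=10, punishment_points=points))  # (78, 10, 46)
```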

trust game:

  1. Alice is awarded a sum of money and given the opportunity to transfer any amount of it to Bob, knowing that the experimenter will triple the amount transferred (if Alice gives x, Bob receives 3x).
  2. Bob then has the opportunity to return some of this augmented sum to Alice.
  3. This ends the game.

Alice is sometimes called the “truster” or “investor,” and Bob the “trustee.”
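
A minimal payoff sketch of the trust game. The tripling of the transfer follows the description above; Alice's initial award and the players' choices are illustrative assumptions.

```python
# Minimal payoff sketch of the trust game above. Alice's initial award
# (10 units) and the players' choices are illustrative assumptions; the
# tripling of the transfer follows the text.

def trust_game(alice_award, alice_transfer, bob_return):
    received_by_bob = 3 * alice_transfer          # experimenter triples the transfer
    alice = alice_award - alice_transfer + bob_return
    bob = received_by_bob - bob_return
    return alice, bob

# If Alice trusts fully and Bob returns half of the tripled amount,
# both end up better off than if nothing had been transferred:
print(trust_game(alice_award=10, alice_transfer=10, bob_return=15))  # (15, 15)

# A purely self-regarding Bob returns nothing, so an Alice who anticipates
# this transfers nothing:
print(trust_game(alice_award=10, alice_transfer=0, bob_return=0))    # (10, 0)
```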

Social dilemmas

These are interactions in which the uncoordinated actions of individuals result in an outcome that is Pareto inefficient, meaning that there exists some other feasible outcome such that at least one member could be better off while no member would be worse off.

Examples of social dilemmas modeled by game theorists include the prisoner’s dilemma and the public goods game.

Social preferences convert a prisoner’s dilemma material payoff structure into what is called an assurance game payoff structure — each player will cooperate if assured that the other will cooperate as well, and will not if not. Mutual cooperation and mutual defection are both Nash equilibria. Which of the two Nash equilibria will obtain depends on the players’ beliefs about what the other will do.
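
A minimal sketch of this conversion with illustrative numbers (not the book's): adding a subjective "guilt" cost for defecting on a cooperator changes the best responses so that mutual cooperation joins mutual defection as a pure Nash equilibrium.

```python
# Minimal sketch (illustrative payoff numbers, not from the book): social
# preferences turn a prisoner's dilemma into an assurance game. Here the
# assumed subjective cost is "guilt" of 2 felt when defecting on a cooperator.

from itertools import product

C, D = "cooperate", "defect"

# Material payoffs to the row player (the game is symmetric): T > R > P > S.
material = {(C, C): 2, (C, D): 0, (D, C): 3, (D, D): 1}

GUILT = 2  # assumed subjective penalty for defecting on a cooperator

def subjective(own, other):
    pay = material[(own, other)]
    if own == D and other == C:
        pay -= GUILT
    return pay

def pure_nash(payoff):
    """Pure-strategy Nash equilibria of the symmetric 2x2 game."""
    eqs = []
    for a, b in product([C, D], repeat=2):
        row_ok = all(payoff(a, b) >= payoff(alt, b) for alt in [C, D])
        col_ok = all(payoff(b, a) >= payoff(alt, a) for alt in [C, D])
        if row_ok and col_ok:
            eqs.append((a, b))
    return eqs

print(pure_nash(lambda own, other: material[(own, other)]))  # [('defect', 'defect')]
print(pure_nash(subjective))  # [('cooperate', 'cooperate'), ('defect', 'defect')]
```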

The private nature of signals is what makes the prisoner’s dilemma a dilemma.

Robert Aumann won the Nobel prize in 2005 (shared with Thomas Schelling) for contributions to game theory. His analysis of repeated games showed that cooperation is less likely when there are many participants, when interactions are infrequent or likely to be broken off, when the time horizon is short, and when others’ actions cannot be clearly observed.

Genes, Culture, Groups, and Institutions

According to gene-culture coevolution, human preferences and beliefs are the product of a dynamic whereby genes affect cultural evolution and culture affects genetic evolution, the two being tightly intertwined in the evolution of our species.

Reproductive leveling

Reproductive leveling contributes to the evolution of altruism because it helps prevent altruists from being starved of critical resources. Altruists do the right thing at some cost to themselves; as a result, they can lose ground in evolutionary games.

Individual differences in size, health, information, behavior, and other influences on access to scarce resources affect reproductive success.

Among some other primates and especially among humans, reproductive leveling attenuates this relationship. Because altruists receive lower payoffs than other group members, they benefit from reproductive leveling because this attenuates the within-group selective pressures working against them.

Parochialism

Parochialism, as defined in the book, covers religious intolerance, racism, and xenophobia, all of which vary across cultures and over time.

Punisher

An altruistic group member who enforces group norms, possibly at some cost to their own self-interest.

Cooperative Nonpunisher

A Cooperative Nonpunisher cooperates in the sense of abiding by group norms, but does not do their part in punishing defectors (free riders or shirkers).

Opportunists

Opportunists are group members who normally cooperate and normally do their part of the dirty work as Punishers; however, when they see an opportunity to shirk without being noticed, they take it.

Second-order free-rider

A second-order free rider cooperates but does not punish, free riding on the costly punishment supplied by others; by avoiding that cost, such individuals can outcompete the Punishers. For punishment to be sustained, the models require that the group contain enough Punishers to tolerate some Cooperative Nonpunishers.


Primers

Agent-based modeling

This is a tool for analyzing complex dynamical systems as a complement to explicit mathematical analysis where the latter is either impossible or uninformative.

Such modeling (often called “simulation”) lies outside the two standard methods of gaining scientific knowledge: deduction and induction.

Figure A1: Structure of evolutionary game-based simulation

Agent-based modeling is like deduction in that it starts with a rigorously specified computer program, but it is like induction in that it treats the operation of the program as a set of data points from which generalizations can be made.

If a complex system has emergent properties, these can be ascertained by implementing an agent-based model in which these properties appear and persist across many simulation runs.
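
A minimal sketch of the simulation structure suggested by Figure A1, not the authors' actual model; the stage game, population size, and mutation rate here are illustrative assumptions. Agents are paired at random to play a stage game, strategies replicate in proportion to payoff, and occasional mutation keeps exploring the strategy space.

```python
# Minimal sketch of an evolutionary game-based simulation in the spirit of
# Figure A1; payoffs, population size, and mutation rate are illustrative
# assumptions, not the authors' model.

import random

PAYOFF = {("C", "C"): (2, 2), ("C", "D"): (0, 3),
          ("D", "C"): (3, 0), ("D", "D"): (1, 1)}
POP_SIZE, ROUNDS, MUTATION = 100, 200, 0.01

def simulate(seed=0):
    rng = random.Random(seed)
    population = ["C" if rng.random() < 0.5 else "D" for _ in range(POP_SIZE)]
    history = []
    for _ in range(ROUNDS):
        # 1. Pair agents at random and play the stage game.
        rng.shuffle(population)
        fitness = []
        for i in range(0, POP_SIZE, 2):
            a, b = population[i], population[i + 1]
            pa, pb = PAYOFF[(a, b)]
            fitness.extend([(a, pa), (b, pb)])
        # 2. Differential replication: strategies reproduce in proportion
        #    to payoff (plus a small baseline so weights stay positive).
        weights = [1 + pay for _, pay in fitness]
        population = [s for s, _ in rng.choices(fitness, weights=weights, k=POP_SIZE)]
        # 3. Mutation: occasionally flip a strategy at random.
        population = [("C" if s == "D" else "D") if rng.random() < MUTATION else s
                      for s in population]
        history.append(population.count("C") / POP_SIZE)
    return history

print(simulate()[-1])  # long-run frequency of cooperators in one run
```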

How do we judge the empirical adequacy of an agent-based model?

  1. Ensure that the parameters chosen are empirically plausible for the populations under study.
    • Sensitivity analysis: check how much difference variations in the parameters make to the results of the simulation.
    • On the basis of this sensitivity analysis, devote special attention to making sure that the parameters that matter most are well estimated.
  2. Exploit the fact that, while the processes under investigation are unknown (which is why we are simulating them), the following hold:
    • The simulations generate a large number of by-product statistics about aspects of the relevant populations on which we do have some knowledge.
    • We can therefore ask whether the results of the simulation conform to known facts about the populations under study; in other words, the model has some predictive content, and the hypothesis is that its output will match the real system in second- and third-order effects, not just the immediate ones.
    • Where the model has generated implausible by-product statistics, diagnose the source of the problem and recalibrate.

Figure A2: Structure of replication process.

Game Theory

Game theory is a mathematical tool for the study of strategic interactions where payoffs of individuals depend on their own actions and the actions taken by others.

On strategy

A player's strategy is any of the options they can choose in a setting where the outcome depends not only on their own actions but also on the actions of others.

A pure strategy determines all your moves during the game, and should therefore specify your moves for all possible other players' moves.

A mixed strategy is a probability distribution over all possible pure strategies, some of which may get zero weight.

A state is a Nash equilibrium if every player’s choice is a best response to the choices of the other players; here the choices of all other players are treated as known to and understood by every player.

A dominant strategy offers a higher payoff than any other strategy, no matter what the other players do.

Non-strategic behavior is behavior that would be considered non-rational according to neoclassical models based on noncooperative game theory.

Repeated games

In a repeated game, a stage game is repeated indefinitely, with a positive probability of terminating the process at the end of each period.

The most important fact about the repeated game based on stage game G is that it can support cooperative equilibria in situations where G cannot.
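
A standard worked example (a textbook result, with illustrative payoff numbers): in a repeated prisoner's dilemma where play continues with probability δ each period, the grim-trigger strategy sustains mutual cooperation exactly when δ ≥ (T − R)/(T − P).

```python
# Minimal sketch (standard textbook result, illustrative payoff numbers):
# in a repeated prisoner's dilemma with continuation probability delta,
# grim trigger sustains cooperation iff
#     R / (1 - delta) >= T + delta * P / (1 - delta),
# i.e. delta >= (T - R) / (T - P).

T, R, P, S = 3, 2, 1, 0   # temptation, reward, punishment, sucker payoffs

def cooperation_sustainable(delta):
    value_cooperate = R / (1 - delta)              # cooperate forever
    value_defect = T + delta * P / (1 - delta)     # defect once, then mutual defection
    return value_cooperate >= value_defect

critical_delta = (T - R) / (T - P)
print(critical_delta)                   # 0.5
print(cooperation_sustainable(0.6))     # True: the future weighs heavily enough
print(cooperation_sustainable(0.4))     # False: the stage-game logic prevails
```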

Adaptive agents in evolutionary games

Adaptive agents in evolutionary games adopt behaviors in a manner similar to the way people come to have a particular accent or to speak a particular language, by repeated trial-and-error, possibly by supervised learning, more commonly by reinforcement learning.

Forward-looking payoff-based calculation is not entirely absent. For example those aspiring to upward mobility may adopt upper class accents. But in evolutionary theories of adaptive agent behavior, conscious optimizing is not the whole story.

Example: The answer to “why do you talk that way?” is generally “because this is how people talk where I come from” not “because I considered all the ways of speaking and decided that speaking this way best serves my personal goals,” although it may be for some individuals.

Dynamical Systems

There are two major types of dynamical systems:

  1. a continuous time system using differential equations, and
  2. a discrete time system using Markov chains

An equilibrium of such a dynamical system is also called a critical point, fixed point, or stationary point. At an equilibrium, dx/dt = dy/dt = 0, so the dynamical system remains forever at (x, y) once it reaches there. Under what conditions does a dynamical system move toward an equilibrium?

Very few dynamical systems, even simple ones in two dimensions, can be solved analytically, so the paths x(t) and y(t) cannot be written in closed form. Nevertheless, there are well-developed methods for determining when an equilibrium is stable, unstable, or neutrally stable, using tools from algebra and calculus.
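
A minimal sketch of one such method, using an illustrative two-dimensional system that is not from the book: linearize at the equilibrium and inspect the eigenvalues of the Jacobian; if all real parts are negative, the equilibrium is stable.

```python
# Minimal sketch (illustrative system, not from the book): check the local
# stability of an equilibrium of a two-dimensional dynamical system by
# linearizing and inspecting the eigenvalues of the Jacobian.

import numpy as np

# System: dx/dt = y, dy/dt = -x - 0.5*y, with an equilibrium at (0, 0).
def jacobian(x, y):
    # Partial derivatives of (dx/dt, dy/dt) with respect to (x, y);
    # constant here because the system is linear.
    return np.array([[0.0, 1.0],
                     [-1.0, -0.5]])

eigenvalues = np.linalg.eigvals(jacobian(0.0, 0.0))
stable = all(ev.real < 0 for ev in eigenvalues)
print(eigenvalues, stable)   # complex pair with negative real parts -> stable spiral
```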

Markov Chain

A finite Markov chain is a dynamical system that can be in any of n states (s1, …, sn); if the system is in state i in time period t, it will be in state j in time period t+1 with probability pij. Of course, for this to make sense, we must have pij ≥ 0 for all i, j = 1, …, n, and, for each i, Σj pij = 1. Statistical estimates of these probabilities, based on thousands of implementations of our model, for example, are the basis of our calculation of the vector field ... giving the movement of the population among the states indicating various frequencies of altruists and of parochials.

When a Markov chain has the property that the average fraction of time in each state in the long run is independent of the starting state, we say the system is ergodic, and we call the resulting long-run distribution of probabilities the stationary distribution of the Markov chain.
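
A minimal sketch with an illustrative three-state transition matrix (not the authors' model): repeatedly applying the transition matrix approximates the stationary distribution, and for an ergodic chain the result is the same from any starting state.

```python
# Minimal sketch (illustrative transition matrix, not the authors' model):
# the stationary distribution of an ergodic Markov chain, found by
# repeatedly applying the transition matrix; the result does not depend
# on the starting state.

import numpy as np

# P[i][j] = probability of moving from state i to state j; rows sum to 1.
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

def stationary(P, start, steps=1000):
    dist = np.array(start, dtype=float)
    for _ in range(steps):
        dist = dist @ P
    return dist

print(stationary(P, [1, 0, 0]))  # same limiting distribution ...
print(stationary(P, [0, 0, 1]))  # ... from a different starting state
```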

The Replicator Dynamic

The most natural dynamic to apply to an evolutionary game is the replicator dynamic.

It can be shown that every equilibrium of an evolutionary game under the replicator dynamic is a Nash equilibrium of the stage game. This shows that the Nash equilibrium criterion remains powerful even without assuming that players are rational (i.e., that they choose best responses) or coordinated.

Maynard Smith developed the stronger notion of an evolutionarily stable strategy: a strategy such that a whole population using it cannot be invaded by a small group playing any other strategy.
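
A minimal sketch of the replicator dynamic for a symmetric two-strategy game; the assurance-game payoffs are illustrative. The share of a strategy grows when its payoff exceeds the population average, and which stable equilibrium the population reaches depends on the initial shares.

```python
# Minimal sketch (illustrative assurance-game payoffs, not from the book):
# the replicator dynamic for a symmetric two-strategy game. The share x of
# strategy A grows when A earns more than the population average payoff:
#     dx/dt = x * (payoff_A(x) - average_payoff(x))

# Payoffs to the row strategy against the column strategy.
PAYOFF = {("A", "A"): 2, ("A", "B"): 0, ("B", "A"): 1, ("B", "B"): 1}

def replicator(x0, dt=0.01, steps=2000):
    x = x0                                  # share of the population playing A
    for _ in range(steps):
        payoff_a = x * PAYOFF[("A", "A")] + (1 - x) * PAYOFF[("A", "B")]
        payoff_b = x * PAYOFF[("B", "A")] + (1 - x) * PAYOFF[("B", "B")]
        average = x * payoff_a + (1 - x) * payoff_b
        x += dt * x * (payoff_a - average)  # Euler step of the replicator equation
    return x

# Both all-A and all-B are stable equilibria; which one the population
# reaches depends on the initial share of A (the basin of attraction).
print(round(replicator(0.6), 3))  # -> approximately 1.0
print(round(replicator(0.4), 3))  # -> approximately 0.0
```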


Part 0 : Overview of 14 key lessons of cooperative economics.

Part 1 : Overview of competing alternatives.

Part 2 : Failures of non-cooperative theory.

Part 3 : Evolutionary economics, rise of institutions, and the co-evolution of genes and culture.

