Tragedy of the Commons Issues LO26871

From: Mark W. McElroy (
Date: 06/23/01

Replying to LO26862 --


Thank you for your response. I had not considered the possibility of ToCs
arising in artificial environments such as AI, but I wonder whether we
should not simply regard them as extensions of the human world, since they
are nothing more than human artifacts, created BY humans, usually in an
effort to mimic our own capacity for reasoning. Perhaps it should not
surprise us, then, that the simulations of ourselves that we create exhibit
many of our worst behaviors, not just the desirable ones. Faulty reasoning
is still faulty, whether found in its native or its simulated form.

Turning back to the natural world, I see from the quote you provided that
agents in a ToC situation need not be human, but that is precisely the
claim I am trying to confirm. Where are the cases of non-human agents
behaving in ways that lead to the depletion of shared resources? Or is the
author of those words incorrect? It seems to me that technology plays a
pivotal role in the production of ToCs, and that humans, unlike any other
creature, are uniquely qualified to bring such factors into play. Again,
where is the evidence to the contrary?

I agree with your intra-organizational cases of ToCs. It seems that ToCs
can be found both within and between human social systems.



D P Dash wrote:

> "The tragedy of the commons only requires that agents holding a resource
> in common be rational, not that they be human. Consequently, a Distributed
> AI (DAI) system will also fall prey to the problem when it relies on a
> common resource shared among its agents. The system's resource will also
> be at risk when external agents (human or otherwise) utilize it."
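The quoted claim, that the tragedy requires only rational agents rather than human ones, can be illustrated with a minimal simulation sketch. All names and parameter values below are hypothetical assumptions for illustration, not drawn from the original discussion: each simulated agent harvests from a shared resource whenever its private gain is positive, while the cost of depletion is spread across all agents.

```python
# Hypothetical sketch: myopically rational agents sharing a common resource.
# Each round, every agent takes a unit whenever anything remains, because
# its private benefit is positive while the depletion cost is shared.

def simulate_commons(n_agents=10, capacity=100.0, regrowth=0.05,
                     take=2.0, rounds=50):
    """Return the resource level after each round of greedy harvesting."""
    resource = capacity
    history = []
    for _ in range(rounds):
        for _agent in range(n_agents):
            # A rational agent harvests whenever the resource is non-empty;
            # it has no individual incentive to abstain.
            harvest = min(take, resource)
            resource -= harvest
        # Proportional regrowth, capped at the carrying capacity.
        resource = min(capacity, resource * (1.0 + regrowth))
        history.append(resource)
    return history

history = simulate_commons()
# Total demand (10 agents x 2 units) outpaces 5% regrowth, so the
# commons collapses even though every individual acted rationally.
```

Nothing in the sketch depends on the agents being human; the same dynamic applies to software agents in a DAI system drawing on a shared computational resource.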


"Mark W. McElroy" <>

Learning-org -- Hosted by Rick Karash <> Public Dialog on Learning Organizations -- <>

"Learning-org" and the format of our message identifiers (LO1234, etc.) are trademarks of Richard Karash.