Morality seems to be irrational. Moral agents spread co-operation,
which is good for all but even better for the amoral. If "the
virtuous" finish last, one cannot defend morality as rational.
"Artificial Morality" addresses and answers this objection by
showing how to build moral agents that succeed in competition with
amoral agents. Professor Danielson's agents deviate from the
received theory of rational choice. They are bound by moral
principles and communicate their principles to others. The central
thesis of the book is that these moral agents are more successful
in crucial tests, and therefore rational. Why design agents? Human
agents and the situations they create are too complex for an
investigation of the most elementary aspects of rationality and
morality. Danielson uses instead robots paired in abstract games
that model social problems, such as environmental pollution, which
reward co-operators but even more those that benefit from others'
constraint. It is shown that virtuous, not vicious, robots do
better in these virtual games. This book should be of interest to
those working in the fields of philosophy, artificial intelligence
and computer studies.
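The games described above have the structure of the Prisoner's Dilemma, and the moral agents succeed because they can learn one another's principles before moving. The following is a minimal sketch of that idea, not the book's actual implementation (Danielson's agents inspect each other's code; here a principle is reduced to a label, and the payoffs, strategy names, and population mix are illustrative assumptions):

```python
# One-shot Prisoner's Dilemma with "transparent" agents: each agent
# sees the opponent's declared principle before choosing a move.
# Payoff matrix (row's score, column's score); T=3 > R=2 > P=1 > S=0.
PAYOFF = {('C', 'C'): (2, 2), ('C', 'D'): (0, 3),
          ('D', 'C'): (3, 0), ('D', 'D'): (1, 1)}

STRATEGIES = {
    # Unconditional co-operator: co-operates with everyone.
    'UC': lambda opponent: 'C',
    # Unconditional defector: defects against everyone.
    'UD': lambda opponent: 'D',
    # Conditional co-operator: co-operates only with agents whose
    # principle would return its co-operation; defects otherwise.
    'CC': lambda opponent: 'C' if opponent in ('CC', 'UC') else 'D',
}

def tournament(population):
    """Round-robin one-shot games; returns total score per label."""
    totals = {label: 0 for label in set(population)}
    for i, a in enumerate(population):
        for b in population[i + 1:]:
            move_a = STRATEGIES[a](b)   # a sees b's principle
            move_b = STRATEGIES[b](a)   # b sees a's principle
            pay_a, pay_b = PAYOFF[(move_a, move_b)]
            totals[a] += pay_a
            totals[b] += pay_b
    return totals

# A small illustrative population: one of each unconditional type,
# three conditional co-operators.
scores = tournament(['UC', 'UD', 'CC', 'CC', 'CC'])
```

In this run each conditional co-operator earns 7 points (21 across the three of them), while the unconditional defector and unconditional co-operator earn 6 each: the morally constrained agent gains co-operation with its kind yet cannot be exploited, which is the sketch's version of virtuous robots doing better.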