In the popular imagination, artificial intelligence (AI) is usually
portrayed as a divine entity that makes “just” and
“objective” decisions. Yet AI is anything but intelligent.
Rather, it recognises in large amounts of data what it has been
trained to recognise. Like a sniffer dog, it finds exactly what it
has been taught to look for. In performing this task, it is far
more efficient than any human being – but precisely this is also
its problem: AI only mirrors or repeats what it has been instructed
to reflect. Seen in this light, it may be viewed as a kind of
digital “house of mirrors”. Humans train machines, and these
machines are only as good or as bad as the humans who train them.
Based on this insight, the publication addresses not only
algorithmic bias or discrimination in AI, but also AI-related
issues such as hidden human labour, the problem of categorisation
and classification – and our ideas and fantasies about AI. It
also raises the question of whether (and how) it is possible to
reclaim agency in this context. Text in English and German.