→ Artificial Morality
Bruce Sterling on AI ethics:
In the hermetic world of AI ethics, it’s a given that self-driven cars will kill fewer people than we humans do. Why believe that? There’s no evidence for it. It’s merely a cranky aspiration. Life is cheap on traffic-choked American roads — that social bargain is already a hundred years old. If self-driven vehicles doubled the road-fatality rate, and yet cut shipping costs by 90 percent, of course those cars would be deployed.
…
Technological proliferation is not a list of principles. It is a deep, multivalent historical process with many radically different stakeholders over many different time-scales. People who invent technology never get to set the rules for what is done with it.