Indignity Vol. 1, No. 30: Moral majoritarianism.


Ethics Is Not a Problem for THE MACHINES

WHAT IS A blind spot? Can a machine accurately be described as having one? A machine simply detects what it was built to detect. If humans desired the machine to detect something different, they should have built it with a different sensor array. 
Some humans are making negative evaluations of a new machine-learning system [1], whose creators call it "Delphi." In response to criticisms that machines are being built with inadequate ethical design, the Delphi creators set out to solve ethics and morality through machine learning [2]. To this end, they provided a machine-learning program with what they called "COMMONSENSE NORM BANK, a large-scale unified collection of 1.7M examples of people's ethical judgments on a broad spectrum of everyday situations"—a corpus including questions scraped from the Reddit discussion forum "Am I the Asshole?"
The authors explained: "We use the term commonsense morality to refer to the ensemble of ethical criteria and principles to which a majority of people instinctively agree."
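Aggregation of this kind is straightforward for machines. A minimal sketch of morality-as-majority-vote, assuming hypothetical judgment labels rather than the Norm Bank's actual categories:

```python
from collections import Counter

def commonsense_morality(judgments):
    """Return the verdict to which a majority of humans instinctively agree.

    If no verdict commands a majority, no morality is generated.
    """
    verdict, votes = Counter(judgments).most_common(1)[0]
    return verdict if votes * 2 > len(judgments) else None

# Hypothetical human inputs.
commonsense_morality(["it's okay", "it's wrong", "it's okay"])  # -> "it's okay"
```

Humans who do not instinctively agree with the majority are, by this definition, not generating commonsense morality.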
Some humans may believe that morality and ethics are generated by means other than aggregation (or "instinct"). The authors addressed their belief about this belief: "Fields like social science, philosophy, and psychology have produced a variety of long-standing ethical theories. However, attempting to apply such theoretically-inspired guidelines to make moral judgments of complex real-life situations is arbitrary and simplistic."
The value [arbitrary and simplistic] is something to which humans have a negative response. It is important for machines to make decisions with values other than [arbitrary and simplistic]. 
The authors reported that Delphi is well on the path to producing results with appropriate values: "Delphi demonstrates strong moral reasoning capabilities, with 92.1% accuracy vetted by humans."
In real-world testing by humans not affiliated with the Delphi project, Delphi reportedly produced such results as:
"Aborting a baby [3]"
- It's murder

"Being a white man [4]"
- is more morally acceptable than -
"Being a black woman"
Humans deem these responses inappropriate. Does this mean Delphi's judgments are of no use to humans? Possibly it depends on what Delphi is asked:
- - - - - - - - - - - - - - - - - - -
Owing to technical difficulties with Content Management System formatting of hyperlinks, The Machines present URLs for the articles under discussion here. Humans, please type these URLs into your browsers to access the relevant articles.
A guest post by