Do you ever stop to think about how big a role artificial intelligence plays in helping us make decisions (or, for that matter, making decisions for us)? Much of it is invisible: predictive analytics and algorithms quietly shape what we see and the information we use to decide.
But are we ready for technology to help us with the more existential, hard-hitting questions? Or, perhaps, to make some decisions for us, decisions that require an ethical, uniquely human nuance? A new study published in the Journal of Behavioral and Experimental Economics took a look, and the results are interesting.
Specifically, researchers examined how people react to autonomous, self-driving cars to better gauge whether we might (or might not) be comfortable with artificial intelligence making some of those tough calls for us.
As part of the study, researchers conducted two separate experiments. In the first, they presented 529 participants with a scenario in which a car's driver had to choose which of two groups of people an unavoidable crash would affect. Participants were told the driver was either a human or an AI, and the researchers used their responses to gauge potential bias.
In the second experiment, researchers looked more closely at how people responded to the idea of autonomous cars, comparing reactions to a scenario in which autonomous cars were simply made legal with one in which participants got to vote on the issue.
When assessing participants' responses to the crash scenario, researchers found no particular preference between the human driver and the AI driver. When asked directly whether either driver should be allowed to make the ethical call, however, participants showed a stronger preference for human drivers over AI drivers. Researchers suspect a few reasons may be behind this preference.
First, researchers believe participants were conflating their own beliefs with what they assumed society as a whole believed about AI: there is a perceived general lack of appetite for AI decision making in vehicles, and participants' stated views reflected that assumption.
Second, researchers found that in countries where people had more trust in their government, decision making and information processing around these issues tended to be better than in countries where governmental trust was lower.
Overall, according to the study, the aversion to AI decision making is not inherently a problem of individual belief. Instead, it tends to stem from what people believe the broader society believes.
Sources: EurekAlert!; Journal of Behavioral and Experimental Economics