What exactly is artificial intelligence (AI)?
It may sound like science fiction, and it often conjures up images of machines that become just as smart as, if not smarter than, humans. Described more practically (and accurately), AI refers to the use of computers and data to solve problems and, in some cases, make decisions.
A range of industries, including healthcare and the automotive industry, use AI for numerous purposes. For example, artificial intelligence has been explored as a way to help doctors make better diagnostic and treatment decisions; by drawing on patient data, AI can predict which drug might be more effective for a given treatment. AI has also become increasingly common in our daily lives: just think of how prominent Alexa and Siri have become in our day-to-day activities.
Given how prominent AI is becoming in our world, an important ethical question emerges: do we trust AI to do what we need it to do? A new study published in the International Journal of Human-Computer Interaction has attempted to answer this question quantitatively.
Researchers from the University of Tokyo attempted to quantify people’s perceptions of AI by developing an “octagon measurement,” a scale that rates attitudes towards AI in eight categories, including fairness, privacy, accountability, and responsibility. The researchers then sent respondents four scenarios to judge and analyze along these eight dimensions: AI-generated art, customer service AI, autonomous weapons, and AI in crime prediction.
Along with the ratings, the researchers collected demographic information, and found that women, older individuals, and those with subject-matter knowledge of AI viewed the risks posed by these scenarios more negatively, a trend consistent with prior work. In other words, these groups held less favorable views of AI. The researchers also noted much greater anxiety and hesitation about AI being used in weaponry, a finding that was likewise unsurprising.
The study team hopes their evaluation scale can be used in the future to better gauge public perception of AI technologies and to help close the persistent gap between what people believe about AI and what is actually true.