JUL 25, 2018 12:44 PM PDT

Computer Model May Put An End To Uncivil Online Conversations

WRITTEN BY: Nouran Amin

 

The internet has the potential to serve as a platform for constructive dialogue and cooperation. Unfortunately, online conversations all too often degenerate into personal attacks. To help avert such attacks, researchers at Cornell University developed a model that can predict when conversations might turn negative. By analyzing hundreds of discussions between Wikipedia editors, the researchers designed a computer program that scans for warning signs in the language used by conversation starters and responders at the beginning of an online discussion, such as repeated, direct questioning or use of the word "you," to predict whether civil conversations might go awry.

According to the study, early exchanges that included greetings, expressions of gratitude, hedged phrases such as "it seems," and the pronouns "I" and "we" were most likely to remain positive and civil. "There are millions of such discussions taking place every day, and you can't possibly monitor all of them live. A system based on this finding might help human moderators better direct their attention," explains Cristian Danescu-Niculescu-Mizil, an assistant professor of information science and co-author of the study "Conversations Gone Awry: Detecting Early Signs of Conversational Failure." "We, as humans, have an intuition of whether a conversation is about to go awry, but it's often just a suspicion. We can't do it 100 percent of the time. We wonder if we can build systems to replicate or even go beyond this intuition," says Danescu-Niculescu-Mizil.
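To make these cues concrete, here is a minimal sketch of how the first two comments of a discussion might be scored for the signals described above. The cue lexicons, regular expressions, and simple thresholding are hypothetical stand-ins for illustration only; the study's actual model learned its features from annotated Wikipedia discussions.

```python
import re

# Illustrative cue lexicons. The study learned its features from annotated
# Wikipedia data; these patterns are hypothetical stand-ins for the kinds
# of cues described above.
WARNING_CUES = {
    "second_person": re.compile(r"\byou(?:r|rs)?\b", re.IGNORECASE),
    "direct_question": re.compile(r"\b(?:why|what|how)\b[^?]*\?", re.IGNORECASE),
}
CIVILITY_CUES = {
    "greeting_or_thanks": re.compile(r"\b(?:hi|hello|hey|thanks|thank you)\b", re.IGNORECASE),
    "hedge": re.compile(r"\b(?:it seems|perhaps|maybe|i think)\b", re.IGNORECASE),
    "first_person": re.compile(r"\b(?:i|we)\b", re.IGNORECASE),
}

def looks_risky(opener: str, reply: str) -> bool:
    """Toy heuristic: flag an opening exchange if warning-sign cues
    outnumber civility cues across the first two comments."""
    text = opener + " " + reply
    warning = sum(len(p.findall(text)) for p in WARNING_CUES.values())
    civility = sum(len(p.findall(text)) for p in CIVILITY_CUES.values())
    return warning > civility

# A pointed, "you"-heavy opening versus a hedged, first-person one.
print(looks_risky("Why did you revert my edit? What were you thinking?",
                  "Your sources are weak."))                       # True
print(looks_risky("Hi! Thanks for the cleanup.",
                  "I think it seems fine; maybe we can merge."))   # False
```

A fixed word list like this is far cruder than a trained classifier, but it shows why such cues are cheap enough to compute at the scale of millions of live discussions.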

The model, which incorporated Google's Perspective, a machine-learning tool for evaluating "toxicity," was correct around 65 percent of the time; humans were correct 72 percent of the time. The study examined 1,270 conversations that started civilly but derailed into personal attacks, drawn from 50 million conversations across 16 million Wikipedia "talk" pages, the online spaces where editors discuss articles and other issues. The researchers analyzed exchanges in pairs, comparing each discussion that ended badly with one on the same topic that stayed civil, to ensure that the results weren't skewed by sensitive topics such as politics.
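The paired setup lends itself to a simple evaluation: for each matched pair, a model scores both conversations and is counted correct if it assigns the higher risk to the one that actually derailed, so guessing yields 50 percent and the figures above (65 percent for the model, 72 percent for humans) sit against that baseline. Below is a minimal sketch of that evaluation, assuming a hypothetical risk_score function in place of the study's classifier.

```python
from typing import Callable, List, Tuple

def pairwise_accuracy(pairs: List[Tuple[str, str]],
                      risk_score: Callable[[str], float]) -> float:
    """Fraction of pairs where the conversation that actually derailed
    receives the higher risk score. Chance performance is 50 percent."""
    correct = sum(1 for derailed, civil in pairs
                  if risk_score(derailed) > risk_score(civil))
    return correct / len(pairs)

# Hypothetical stand-in scorer: count occurrences of "you" as a crude risk proxy.
def toy_score(text: str) -> float:
    return float(text.lower().split().count("you"))

# One toy pair: a derailed opener and a civil opener on the same topic.
pairs = [
    ("Why would you even write this? You clearly didn't check the sources.",
     "I think this section could use better sources; it seems a bit thin."),
]
print(pairwise_accuracy(pairs, toy_score))  # 1.0 on this single toy pair
```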

The researchers hope the model can be used to rescue at-risk conversations and improve online discussions, rather than banning specific users or censoring certain topics. "If I have tools that find personal attacks, it's already too late, because the attack has already happened and people have already seen it," explains Justine Zhang, a co-author of the study. "But if you understand this conversation is going in a bad direction and take action then, that might make the place a little more welcoming."

 

Source: Cornell University

About the Author
Doctorate (PhD)
Nouran is a scientist, educator, and lifelong learner with a passion for making science more communicable. When not busy in the lab isolating blood macrophages, she enjoys writing on various STEM topics.