JUL 25, 2018 12:44 PM PDT

Computer Model May Put An End To Uncivil Online Conversations

WRITTEN BY: Nouran Amin

 

The internet serves as a platform for constructive dialogue and cooperation. Unfortunately, online conversations more often than not degenerate into personal attacks. To help avert such attacks, researchers at Cornell University developed a model that can predict when a conversation is likely to turn negative. After analyzing hundreds of discussions between Wikipedia editors, the researchers designed a computer program that scans the opening exchanges of an online discussion for warning signs in the language of both the conversation starter and the first responder -- for example, repeated, direct questioning, or use of the word "you" -- to predict whether a civil conversation might go awry.

According to the study, early exchanges that include greetings, expressions of gratitude, hedges such as "it seems," and the pronouns "I" and "we" were most likely to remain positive and civil. "There are millions of such discussions taking place every day, and you can't possibly monitor all of them live. A system based on this finding might help human moderators better direct their attention," explains Cristian Danescu-Niculescu-Mizil, an assistant professor of information science and co-author of the study "Conversations Gone Awry: Detecting Early Signs of Conversational Failure." "We, as humans, have an intuition of whether a conversation is about to go awry, but it's often just a suspicion. We can't do it 100 percent of the time. We wonder if we can build systems to replicate or even go beyond this intuition," says Danescu-Niculescu-Mizil.
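To make the idea concrete, here is a minimal sketch of how the cues described above could be counted in an opening comment. The marker lists and the scoring rule are assumptions for illustration only; the researchers' actual features and model are far more sophisticated.

```python
# Illustrative heuristic inspired by the cues the study describes
# (greetings, gratitude, hedges, "I"/"we" vs. "you" and direct questions).
# These marker lists are assumptions, not the study's feature set.
CIVIL_MARKERS = ["thanks", "thank you", "hello", "hi,", "it seems",
                 "i think", "we ", "perhaps", "maybe"]
RISK_MARKERS = ["you ", "you're", "your "]

def opening_risk_score(comment: str) -> int:
    """Count crude warning-sign vs. civility cues in an opening comment.

    Positive scores suggest a higher risk of derailment, negative scores
    a calmer opening. Purely a toy heuristic for illustration.
    """
    text = comment.lower()
    risk = sum(text.count(m) for m in RISK_MARKERS)
    risk += text.count("?")  # direct questioning is a warning sign
    civil = sum(text.count(m) for m in CIVIL_MARKERS)
    return risk - civil

# A confrontational opening scores above zero...
print(opening_risk_score("Why did you revert my edit? You clearly didn't read it."))
# ...while a polite, hedged opening scores below zero.
print(opening_risk_score("Hi, thanks for the update. It seems we could merge these sections."))
```

A real system would feed features like these into a trained classifier rather than a hand-tuned threshold, but the intuition is the same: the linguistic texture of the very first exchanges carries signal about where the conversation is headed.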

The model built on Google's Perspective, a machine-learning tool that evaluates "toxicity," and was correct around 65 percent of the time, while humans were correct 72 percent of the time. The study examined 1,270 conversations that started civilly but derailed into personal attacks, drawn from 50 million conversations across 16 million Wikipedia "talk" pages -- the online spaces where editors can freely discuss articles and other issues. The researchers analyzed the exchanges in pairs, matching each discussion that ended badly with one on the same topic that stayed civil, to ensure that the results weren't skewed by sensitive topics such as politics.

The scientists hope the model can be used to rescue at-risk conversations and improve online discussions, rather than banning specific users or blocking or censoring certain topics. "If I have tools that find personal attacks, it's already too late, because the attack has already happened and people have already seen it," explains Justine Zhang, a co-author of the study. "But if you understand this conversation is going in a bad direction and take action then, that might make the place a little more welcoming."

 

Source: Cornell University

About the Author
  • Nouran is a scientist, educator, and life-long learner with a passion for making science more communicable. When not busy in the lab isolating blood macrophages, she enjoys writing on various STEM topics.