USF-Led Researchers Create Algorithm to Promote Trusted, Diverse Content • St Pete Catalyst

A team of researchers led by the University of South Florida may have found a way to curb the spread of misinformation on social media platforms while protecting free speech and encouraging a diversity of viewpoints.

The research team was made up of computer scientists, physicists, and social scientists from USF, Indiana University, and Dartmouth College. Together, they sought to determine how social media platforms can ensure that users receive reliable information.

The results of their study were published on February 3 in the academic journal Nature Human Behaviour. The research focused on the recommendation algorithms that social media platforms use to prioritize the content users see. While these algorithms typically measure engagement through the number of interactions and page views, the researchers instead focused on the reliability scores of news sources and the political diversity of their audiences.
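The contrast between the two approaches can be illustrated with a minimal sketch. The field names, example numbers, and weights below are purely hypothetical and are not taken from the study; the point is only to show how reliability and audience-diversity signals could shift a ranking that engagement alone would produce.

```python
# Hypothetical sketch: rank posts by engagement alone vs. by a score that
# also weighs source reliability and audience partisan diversity.
# All field names, values, and weights are illustrative, not from the study.

posts = [
    {"title": "A", "engagement": 900, "reliability": 0.2, "audience_diversity": 0.1},
    {"title": "B", "engagement": 400, "reliability": 0.9, "audience_diversity": 0.8},
    {"title": "C", "engagement": 600, "reliability": 0.6, "audience_diversity": 0.5},
]

def engagement_rank(posts):
    """Rank purely by clicks and views, as engagement-driven feeds do."""
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)

def trust_aware_rank(posts):
    """Blend normalized engagement with reliability and diversity signals."""
    max_eng = max(p["engagement"] for p in posts)
    def score(p):
        return (0.4 * p["engagement"] / max_eng
                + 0.3 * p["reliability"]
                + 0.3 * p["audience_diversity"])
    return sorted(posts, key=score, reverse=True)

print([p["title"] for p in engagement_rank(posts)])   # most-clicked source first
print([p["title"] for p in trust_aware_rank(posts)])  # reliable, diverse sources rise
```

Under this toy weighting, the low-reliability but heavily clicked source drops from first to last once the other signals are counted.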

“These algorithms, unfortunately, don’t have access to the signals that tell us whether the information source is reliable or not…” said Giovanni Luca Ciampaglia, assistant professor of computer science and engineering at USF. “Because algorithms select what is relevant from what has been shared by our friends, if we somehow end up in what is called an echo chamber, an algorithm will see only what is popular with a group of people who already agree.”

Giovanni Luca Ciampaglia, assistant professor of computer science and engineering at USF. Photo provided.

Ciampaglia, who led the study, explained that algorithms and the echo chambers they create make the public increasingly vulnerable to misinformation and disinformation (information intentionally designed to mislead). Meanwhile, the algorithm rewards only continued engagement, regardless of quality, further amplifying misinformation.

So, the research team created a new algorithm.

Their algorithm uses data on the web browsing activity and self-reported partisanship of 6,890 people. The data, meant to reflect gender, race and political diversity across the country, was provided by online polling firm YouGov.

The researchers also used trust scores for 3,765 widely shared news sources based on the NewsGuard Trust Index. The index ranks sources according to several journalistic criteria, including editorial responsibility, accountability and transparency.

Ciampaglia said the team hopes to promote trustworthy yet engaging content for a diverse audience.

“We saw that the first thing is that popularity doesn’t predict quality,” he said. “We also saw sources read by more diverse crowds – that is, crowds that included people from both sides of the aisle, so to speak – tended to be more reliable.”

The researchers tested these results against other well-known algorithms in computational experiments and confirmed that incorporating more diversity led to more reliable recommendations. Perhaps most importantly, their algorithm consistently produced relevant content that engaged a diverse audience.
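One way to make "diversity of a source's audience" concrete is Shannon entropy over the partisan makeup of its readership. The sketch below is an assumption on my part, not the study's published method, and the example shares are invented for illustration.

```python
import math

def partisan_diversity(shares):
    """Shannon entropy of a source's audience partisanship shares,
    normalized to [0, 1]; higher means a more politically mixed readership.
    This metric and the share values are illustrative, not from the study."""
    nonzero = [s for s in shares if s > 0]
    h = -sum(s * math.log2(s) for s in nonzero)
    return h / math.log2(len(shares)) if len(shares) > 1 else 0.0

# Example: a source read almost entirely by one side vs. a mixed audience.
echo_chamber = [0.95, 0.05]  # 95% of readers from one partisan group
cross_aisle = [0.5, 0.5]     # evenly split readership

print(round(partisan_diversity(echo_chamber), 3))  # low diversity
print(round(partisan_diversity(cross_aisle), 3))   # 1.0, maximum diversity
```

A score like this could then feed into a ranker as one signal alongside a source's trust rating, matching the paper's observation that sources with more politically mixed audiences tended to be more reliable.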

Ciampaglia said the algorithm was particularly useful for people most susceptible to misinformation. He added that the results held for both conservative and liberal news sources, meaning the algorithm was not penalizing one side of the political spectrum to the benefit of another.

“Which is very important for social media platforms because, in a sense, these platforms try not to be biased towards any particular publisher,” he said.

Ciampaglia said the researchers were surprised to find that a post’s popularity had little or no correlation with its quality. He said their surprise stemmed from the old notion of the “wisdom of the crowd.”

He explained that in the marketplace of ideas, many people believe the most popular and widely accepted viewpoints are usually the most trustworthy. Regardless of who produces the news and who consumes it, Ciampaglia said, the thinking is that “good things bubble up.” He found that this is no longer the case in the digital age.

He believes it is now essential that news sources intentionally reach out to a diverse audience, regardless of their beliefs.

“A more diverse set of readers will keep you more in check, so to speak,” Ciampaglia said.

Ciampaglia recognizes the challenge of convincing social media platforms to change the algorithms that have led to great financial success. He said assumptions that the platforms were essentially neutral and would aggregate information from a wide variety of sources to show what is most relevant and trusted were proven wrong.

The continued practice of simply maximizing engagement carries consequences for users, and social media companies struggle to prevent misinformation from spreading. Some companies are now trying to mitigate misinformation while maintaining current levels of engagement, which Ciampaglia says is a tough proposition.

Ciampaglia said he was moderately optimistic that social media platforms will explore innovative ways to combat the spread of misinformation. He noted Facebook’s recent reversal from a state of denial to committing resources to address the issue.

“Artificial intelligence and machine learning can help,” he said. “But we’re saying that, in some sense, integrating our findings could also help them do a better job.”

About Linda Jackson
