Simon Fraser University (SFU) Communication Professor Ahmed Al-Rawi’s research examines the intersections between political extremism, misinformation and social media. He leads The Disinformation Project at SFU, which examines fake news discourses in Canadian news media and social media.
He also collaborates with SFU’s Digital Democracies Institute on the use of abusive language on social media. He is a frequent commentator in the news media, recently discussing how social media fuels support for the war in Ukraine while contributing to the spread of mis/disinformation.
One of Al-Rawi’s recent studies focused on the way artificial intelligence (AI) reproduces and promotes prejudice, hate and conspiracy online. For the resulting article, How Google Autocomplete Algorithms about Conspiracy Theorists Mislead the Public, he collaborated with Postdoctoral Fellow Carmen Celestini, Master’s student Nathan Worku, and PhD candidate Nicole Stewart.
They examined the subtitles that Google automatically suggested for 37 known conspiracy theorists and found that, in every case, the subtitle failed to reflect the figure’s conspiratorial behaviour.
For example, influential Sandy Hook school shooting denier and conspiracy theorist Alex Jones is listed as “American radio host” and Jerad Miller, a white nationalist responsible for a 2014 Las Vegas shooting, is listed as “American performer.”
Al-Rawi stresses that the perceived neutrality of algorithmic search engines like Google is deeply problematic. He argues that giving known conspiracists neutral rather than critical subtitles can mislead the public and amplify extremist views.
We met with Professor Al-Rawi to discuss his work.
Most internet users perceive Google as a neutral search engine. However, your article mentions some of the biases present in these algorithms. Please describe what is happening here.
Yes, this is exactly the point behind writing this paper. When I first looked at these algorithmically produced labels, I felt there was something very wrong with them, so I proposed following a reverse engineering method to explore further. These labels do not receive enough public scrutiny, unlike the case of Facebook and, to a lesser extent, Twitter. I think search engines are exacerbating the problem of disinformation not only because of these labels, but also because of the affordances they offer, which make it easy for people to search for and find disinformation.
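For readers curious to try a version of this reverse engineering themselves, the sketch below shows one plausible approach in Python: it queries Google's publicly reachable but undocumented autocomplete endpoint for a couple of names and prints any entity metadata attached to the suggestions. The endpoint, its parameters, and the response layout here are informal assumptions drawn from public write-ups, not the method used in the study, and Google may change or rate-limit them at any time.

    # A minimal, hypothetical sketch of auditing the subtitles Google
    # attaches to autocomplete suggestions. Not the study's pipeline:
    # the endpoint is publicly reachable but undocumented, and its
    # response format may change without notice.
    import json
    import urllib.parse
    import urllib.request

    # Hypothetical sample; the study audited 37 known conspiracy theorists.
    NAMES = ["Alex Jones", "Jerad Miller"]

    SUGGEST_URL = "https://suggestqueries.google.com/complete/search"

    def fetch_suggest_payload(query: str) -> list:
        """Fetch the raw JSON payload for one autocomplete query."""
        params = urllib.parse.urlencode({"client": "chrome", "q": query})
        req = urllib.request.Request(
            SUGGEST_URL + "?" + params,
            headers={"User-Agent": "Mozilla/5.0"},  # some default agents get blocked
        )
        with urllib.request.urlopen(req, timeout=10) as resp:
            return json.loads(resp.read().decode("utf-8", errors="replace"))

    for name in NAMES:
        payload = fetch_suggest_payload(name)
        suggestions = payload[1] if len(payload) > 1 else []
        # With client=chrome the final element is usually a metadata dict;
        # entity suggestions reportedly carry their subtitle under the
        # undocumented "google:suggestdetail" key, so print entries raw
        # rather than guessing at inner field names.
        meta = payload[-1] if isinstance(payload[-1], dict) else {}
        for entry in meta.get("google:suggestdetail", []):
            print(name, "detail:", json.dumps(entry, ensure_ascii=False))
        print(name, "suggestions:", suggestions[:5])

Running something like this across a full list of names and comparing the returned annotations against each figure's documented record would reproduce the general shape of the audit the team describes, though their published methodology should be consulted for the specifics.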
If individuals are well known to be conspiracy theorists, why doesn’t Google identify them as such?
I think it is similar to the issue of social media sites that were initially very reluctant to de-platform controversial users for fear of alienating audiences and/or losing revenue.
What are your recommendations for Google? Are policy-makers paying attention?
Due to increasing public and official pressure, social media companies have recently become much more active in moderating their sites. I hope the same will soon happen with Google's search features.
What are your recommendations for internet search engine users? How can we be more attuned to the inner workings of the internet?
I think we all need to be critical of our online surroundings, and I encourage everyone to search for other well-known controversial figures to see how Google has labeled them. I think we need more insight into what are known as the black boxes of algorithms, and the often-biased rules they follow. The same applies to understanding social media platforms better: a similar procedure can reveal which hashtags or keywords are or are not allowed on different sites.
We all need to be diligent with the content we read online because we cannot take what we view for granted. It is useful—and critical—to continue to question and challenge our sources of information about the issues we care about.
Read more about Al-Rawi's research on Google’s search engine algorithms in The Conversation Canada.
The Disinformation Project has been made possible in part by the Department of Canadian Heritage.
SFU's Scholarly Impact of the Week series does not reflect the opinions or viewpoints of the university, but those of the scholars. The timing of articles in the series is chosen weeks or months in advance, based on a published set of criteria. Any correspondence with university or world events at the time of publication is purely coincidental.
For more information, please see SFU's Code of Faculty Ethics and Responsibilities and the statement on academic freedom.