Social media, misinformation and libraries
At the beginning of this year, Altay et al. (2023) published a literature review on misinformation with the thesis that current narratives about social media and misinformation are alarmist and could be construed as a “technopanic”. The article raised interesting points: many studies of social media rely on digital traces and the analysis of “big data”, leading to a focus on numbers and patterns with little nuance regarding the actual effects on human behaviour. It also made the valid point that social media is not the only vehicle of misinformation in the community. However, several limitations stood out.
The first is the use of the term misinformation without a definition. Misinformation is generally acknowledged to exist on a spectrum, ranging from misunderstanding and rumour, through sensationalist and slanted reporting, to outright false information. This leads to a distinction between the unintentionally incorrect – misinformation – and the intentionally false – disinformation (De Paor & Heravi, 2020, p. 3). The distinction is important because it may affect how you respond to the information provided.
Secondly, research is in fact being conducted into how social media affects behaviour. A recent review by Lorenz-Spreen et al. (2023) examines how social media affects democracy, finding, among other things, that social media use can increase democratic participation and can also increase distrust of government, depending on the context of the user and the government they experience. The authors remark that “digital media can implement architectural changes that, even if seemingly small, can scale up to widespread behavioural effects”, and these changes can include changes to platform algorithms.
A study by Ribeiro et al. (2020) examined a radicalisation pathway on YouTube from channels that discuss controversial subjects through to “alt-right” channels, which they define as a segment of the White Supremacist movement (p. 2). By analysing user activity, they documented changes in YouTube recommendations over time that led viewers to increasingly extreme content. As these recommendations are algorithm-driven, this implies there is something in the algorithm that rewards extreme content. Without explicit statements from YouTube, it is impossible to fully understand how our feeds are shaped, or what degree of authority a source should be attributed. What was not analysed, however, was what users themselves said over time, so there is no way of knowing whether speech or behaviour was actually radicalised; this would be an interesting area for further research.
Our exposure to news online is also shaped by algorithms, and a quote from Jaeger & Taylor (2021) resonates in this context:
“algorithmically shaped sets of stories are not necessarily related to the truth” (p. 24).
When we see news via a Chrome feed or other similar source, we do not know which news sources have been reviewed. To demonstrate, try searching Google for something recent in Australian news, then restrict the results using the News button at the top of the page. Today I conducted a search for “Sydney fire”; the first five distinct news sources returned were the ABC, Illawarra Mercury, 9News, Australian Financial Review, and The Guardian. Spot the glaring omissions… where are news.com.au or the Sydney Morning Herald? These are amongst the news sources that were most vocal in the campaign for Google to pay for using their news reports, and it is speculated that they are being penalised as a result. Their exclusion, or demotion down the list of sources, leads to distorted search results – a form of misinformation.