At the beginning of this year, Altay et al. (2023) published a literature review on misinformation arguing that current narratives about social media and misinformation are alarmist and could be construed as a “technopanic”. The article raised interesting points about many studies of social media relying on digital traces and the analysis of “big data”, which leads to a focus on numbers and patterns with little nuance about the actual effects on human behaviour. It also made the valid point that social media is not the only vehicle of misinformation in the community. However, several limitations stood out.
The first is the use of the term misinformation without a definition. Discussions of misinformation generally acknowledge that it exists on a spectrum, ranging from misunderstanding and rumour, through sensationalist and slanted reporting, to outright false information. This leads to a distinction between the unintentionally incorrect – misinformation – and the intentionally false – disinformation (De Paor & Heravi, 2020, p. 3). The distinction matters because it may affect how we respond to the information provided.
Secondly, research is in fact being conducted on how social media affects behaviour. A recent review by Lorenz-Spreen et al. (2023) examines how digital media affect democracy, finding among other things that social media use can increase both democratic participation and distrust of government, depending on the context of the user and the government they experience. They remark that “digital media can implement architectural changes that, even if seemingly small, can scale up to widespread behavioural effects”, and such architectural changes can include changes to platform algorithms.
A study by Ribeiro et al. (2020) examined a radicalisation pathway on YouTube, from channels that discuss controversial subjects through to “alt-right” channels, which they define as a segment of the White Supremacist movement (p. 2). By analysing user activity, they documented changes in YouTube recommendations over time that led viewers to increasingly extreme content. As these recommendations are generated by an algorithm, this implies that something in the algorithm rewards extreme content. Without explicit statements from YouTube it is impossible to fully understand how our feeds are shaped, or how much authority a recommended source should be given. What was not analysed, however, was what users said over time, so there is no way of knowing whether speech or behaviour was actually radicalised; this would be an interesting area for further research.
Our exposure to news online is also shaped by algorithms, and a quote from Jaeger & Taylor (2021) resonates in this context:
“algorithmically shaped sets of stories are not necessarily related to the truth” (p. 24).
When we see the news via a Chrome feed, or another similar source, we do not know which news sources have been reviewed. To demonstrate, try searching Google for something recent in Australian news, then restrict the results using the News button at the top of the page. Today I searched for “Sydney fire”; the first five distinct news sources returned were the ABC, Illawarra Mercury, 9News, Australian Financial Review, and The Guardian. Spot the glaring omissions… where are news.com.au and the Sydney Morning Herald? These are among the news sources that were most vocal in the campaign for Google to pay for using their news reports, and it is speculated that they are being penalised as a result. Their exclusion, or demotion down the list of sources, distorts search results – a form of misinformation.
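To make the mechanism concrete, here is a minimal sketch of how demoting particular outlets could reshape what a searcher sees. The result list, relevance scores, and flat 0.2 penalty are all illustrative assumptions – Google's actual ranking is not public – but they show how a small per-source adjustment changes the apparent news landscape.

```python
# Hypothetical sketch of source demotion in a ranked result list.
# Outlets, relevance scores, and the penalty are assumptions for
# illustration only; this is not Google's actual algorithm.
results = [
    ("Sydney fire latest", "news.com.au", 0.95),
    ("Sydney fire update", "ABC News", 0.93),
    ("Fire in Sydney CBD", "Sydney Morning Herald", 0.91),
    ("Sydney blaze contained", "9News", 0.88),
    ("Sydney fire investigation", "The Guardian", 0.85),
]

DEMOTED = {"news.com.au", "Sydney Morning Herald"}

def adjusted_score(item: tuple[str, str, float]) -> float:
    title, source, relevance = item
    # A flat penalty is enough to push an outlet off the top of the list.
    return relevance - (0.2 if source in DEMOTED else 0.0)

# The two demoted outlets drop below every other source, even though
# their unadjusted relevance put them first and third.
for title, source, _ in sorted(results, key=adjusted_score, reverse=True):
    print(f"{source}: {title}")
```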
“Social media like Facebook, Instagram, Twitter, YouTube and TikTok rely heavily on AI algorithms to rank and recommend content” (Menczer, 2021, para. 10).
Algorithms count the number of likes, comments and shares on a post, not the content of the post or its comments, so a post that attracts a lot of negative attention is treated the same way as a post with many positive comments; there is no qualitative analysis.
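A minimal sketch of this engagement-only ranking is below. The Post fields, the simple sum used as a score, and the example numbers are assumptions for illustration; no platform publishes its real scoring formula in this form. The point is that outrage and praise are indistinguishable to a counter.

```python
# Minimal sketch of engagement-only ranking, as described above.
# Fields and scoring are illustrative assumptions, not any
# platform's actual algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> int:
    # Only interaction counts are used; whether the comments are
    # praise or outrage never enters the calculation.
    return post.likes + post.comments + post.shares

posts = [
    Post("Measured policy explainer", likes=120, comments=15, shares=10),
    Post("Inflammatory hot take", likes=40, comments=300, shares=80),
]

# The inflammatory post ranks first purely on volume of reaction.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(post.title, engagement_score(post))
```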
The logical question, then, is: if increasingly extreme content generates more views and comments, and content creators are rewarded for being more extreme, how much effect does this have? This is another area where research is important. It is certainly my impression that it takes only a single view of an extremely conservative video on YouTube for my feed to become dominated by that type of material, and I have to make a concerted effort to view several more moderate or “liberal” videos before the balance becomes more even.
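This kind of feedback loop can be sketched with a toy recommender that updates a user's topic weights from watch history. The update rule, the 0.5 learning rate, and the topic labels are illustrative assumptions, not any platform's actual mechanism; the sketch only shows how one strong update can dominate a profile, and how several counter-views are needed to dilute it.

```python
# Toy sketch of the feedback loop described above: a naive recommender
# that shifts a user's topic weights toward whatever was just watched.
# The update rule and numbers are assumptions for demonstration only.

def update_weights(weights: dict[str, float], watched_topic: str,
                   learning_rate: float = 0.5) -> dict[str, float]:
    # Move a fraction of every topic's weight toward the watched topic.
    new = {t: w * (1 - learning_rate) for t, w in weights.items()}
    new[watched_topic] = new.get(watched_topic, 0.0) + learning_rate
    return new

weights = {"news": 0.4, "music": 0.4, "extreme": 0.2}

# One view of extreme content immediately dominates the profile...
weights = update_weights(weights, "extreme")
print(weights)  # roughly {'news': 0.2, 'music': 0.2, 'extreme': 0.6}

# ...and it takes several moderate views before that weight fades.
for _ in range(3):
    weights = update_weights(weights, "news")
print({t: round(w, 2) for t, w in weights.items()})
```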
While correlation is not causation and social media is not the only vehicle for misinformation, observation suggests that social media can act as both a megaphone and a magnet. It enables like-minded people to find each other and amplifies their voices. This both increases their visibility and makes them seem a larger group, attracting more attention and therefore more people to persuade. This is both the negative and the positive of social media: while it can be a tool of misinformation, it can also be a positive and cohesive social force.
Awareness of how our social media feeds are shaped enables us to take a critical view of the posts and videos recommended to us. This media literacy is a necessary tool for all users, and the Australian Media Literacy Alliance's research report on media literacy recognised libraries as having the infrastructure and mandate to support media literacy campaigns (Dezuanni et al., 2021). As library staff we need to be aware of how social media recommendations are made and of the limitations of our feeds. By promoting quality sources, libraries can be part of making good information more visible; through media literacy education, and by promoting other channels and posts on our library's social media, we can increase the reach of quality information. Jaeger & Taylor (2021) make an impassioned case that supporting lifelong information literacy is a role vital to libraries, and that libraries are vital in helping others to achieve it. As individual library professionals, it is important that we skill ourselves in this area.
References
Altay, S., Berriche, M., & Acerbi, A. (2023). Misinformation on misinformation: Conceptual and methodological challenges. Social Media + Society, 9(1), 205630512211504. https://doi.org/10.1177/20563051221150412
De Paor, S., & Heravi, B. (2020). Information literacy and fake news: How the field of librarianship can help combat the epidemic of fake news. The Journal of Academic Librarianship, 46(5), 102218. https://doi.org/10.1016/j.acalib.2020.102218
Dezuanni, M., Notley, T., & Di Martino, L. (2021). Towards a national strategy for media literacy: National consultation report. Australian Media Literacy Alliance. https://medialiteracy.org.au/wp-content/uploads/2022/03/AMLA-Consultation-Workshop-Report_UPDATE-25-10-2021-1.pdf
Jaeger, P. T., & Taylor, N. G. (2021). Arsenals of lifelong information literacy: Educating users to navigate political and current events information in a world of ever-evolving misinformation. The Library Quarterly, 91(1), 19-31. https://doi.org/10.1086/711632
Lorenz-Spreen, P., Oswald, L., Lewandowsky, S., & Hertwig, R. (2023). A systematic review of worldwide causal and correlational evidence on digital media and democracy. Nature Human Behaviour, 7(1), 74-101. https://doi.org/10.1038/s41562-022-01460-1
Menczer, F. (2021, September 20). Facebook's algorithms fueled massive foreign propaganda campaigns during the 2020 election – here's how algorithms can manipulate you. The Conversation. https://theconversation.com/facebooks-algorithms-fueled-massive-foreign-propaganda-campaigns-during-the-2020-election-heres-how-algorithms-can-manipulate-you-168229
Ribeiro, M. H., Ottoni, R., West, R., Almeida, V. A., & Meira, W., Jr. (2020). Auditing radicalization pathways on YouTube. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona. https://arxiv.org/pdf/1908.08313.pdf