
Social media, misinformation and libraries

May 26, 2023 • Janet

At the beginning of this year, Altay et al. (2023) published a literature review on misinformation arguing that current narratives about social media and misinformation are alarmist and could be construed as a “technopanic”. The article raised interesting points: many studies of social media rely on digital traces and analysis of “big data”, leading to a focus on numbers and patterns with little nuance regarding the actual effects on human behaviour. It also made the valid point that social media is not the only vehicle of misinformation in the community. However, several limitations stood out.

The first is the use of the term misinformation with no definition. Generally, in addressing misinformation, it is acknowledged that it exists on a spectrum, ranging from misunderstanding and rumour, through sensationalist and slanted reporting, to actual false information. This leads to a distinction between the unintentionally incorrect – misinformation – and the intentionally false – disinformation (De Paor & Heravi, 2020, p. 3). The distinction is important because it may affect how you respond to the information provided.

Secondly, research is being conducted into how social media affects behaviour. A recent review by Lorenz-Spreen et al. (2023) looks at how social media affects democracy, finding, among other things, that social media participation can increase democratic participation and can also increase distrust of government, depending on the context of the user and the government they experience. They remark that “digital media can implement architectural changes that, even if seemingly small, can scale up to widespread behavioural effects”, and these changes can include changes to platform algorithms.

A study by Ribeiro et al. (2020) examined a radicalisation pathway on YouTube from channels that discuss controversial subjects through to “alt-right” channels, which they define as a segment of the White Supremacist movement (p. 2). By analysing user activity, they documented the changes in YouTube recommendations over time that lead a viewer to increasingly extreme content. As these recommendations are algorithm-based, this implies there is something in the algorithm that rewards extreme content. Without explicit statements from YouTube it is impossible to fully understand how our feeds are shaped, and what degree of authority a source should be attributed. However, what was not analysed was what users said over time, so there is no way of knowing whether speech or behaviour was radicalised; this would be an interesting area for further research.

Our exposure to news online is also shaped by algorithms, and a quote from Jaeger & Taylor (2021) resonates in this context:

“algorithmically shaped sets of stories are not necessarily related to the truth” (p. 24).

When we see the news via a Chrome feed, or another similar source, we do not know which news sources have been reviewed. To demonstrate, try searching Google for something recent in Australian news, then restrict the results using the News button at the top of the page. Today I searched for “Sydney fire”; the first five distinct news sources were the ABC, Illawarra Mercury, 9News, the Australian Financial Review, and The Guardian. Spot the glaring omissions: where are news.com.au or the Sydney Morning Herald? These are amongst the news sources that were most vocal in the campaign for Google to pay for using their news reports, and it is speculated that they are being penalised as a result. Their exclusion, or demotion down the list of sources, leads to distorted search results, a form of misinformation.


AI and libraries – do we know what to do with it?

May 24, 2023 • Janet

Perceptions of artificial intelligence: A survey of academic librarians in Canada and the United States, by Sandy Hervieux and Amanda Wheatley

When Sandy Hervieux and Amanda Wheatley conducted a survey of librarians in North America to understand how artificial intelligence was perceived, it became clear that there was no agreed definition of AI and as a result, no clear understanding of how AI interacted with library work. I want to start by describing two differing types of AI.

AI can be defined as “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings” (Encyclopædia Britannica, n.d.). This is commonly achieved by the use of algorithms, a succession of steps determined by the programmer to adapt to the problem or task in an attempt to simulate human intelligence (Loubes, 2022, para 1).

Algorithms may be simple or complex, and are used to generate recommendations for us when we are shopping online, browsing Netflix, or watching YouTube. Algorithms are also used in the construction of meteorological models: when we hear the forecaster say one model predicts this and another that, it is an example of different algorithms creating different results from the same data, using different sets of rules. They may be used in the background of the cataloguing modules in our library management systems, or by our database vendors when providing citations for the articles we are reading.
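To make the forecasting point concrete, here is a minimal sketch in Python. The two “models”, their rules, and the temperature readings are all invented for illustration; real meteorological models are vastly more complex, but the principle is the same: identical data, different rules, different answers.

```python
# Two toy "forecast models" applied to the same temperature readings.
# Data and rules are invented purely for illustration.
readings = [18.2, 19.1, 20.3, 21.0, 21.8]  # recent daily temperatures

def model_persistence(data):
    # Rule: tomorrow will be the same as today.
    return data[-1]

def model_trend(data):
    # Rule: tomorrow continues the average day-to-day change.
    changes = [b - a for a, b in zip(data, data[1:])]
    return data[-1] + sum(changes) / len(changes)

print(model_persistence(readings))         # 21.8
print(round(model_trend(readings), 1))     # 22.7
```

Both models are “correct” by their own rules; neither is guaranteed to match tomorrow’s weather.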

Generative AI can be seen as the next step: AI with the ability to create something new. It uses “a very large corpus of data—text, images, or other labelled data—to create, at the request of users, new versions of text, images, or predicted data” (Euchner, 2023, p. 71). Examples such as ChatGPT are described as large language models because they have ingested a large volume of written material. There are also AIs that produce art, such as Midjourney, and others that recreate people’s voices and create music.

This study revealed that many library staff were unaware of how AI was being used in their library. As algorithms permeate many aspects of our lives without our awareness, except when they malfunction, it is hard to assess their uses and how ethical those uses are. Rules have been established without our input or knowledge, and they can have negative effects, as seen when recommendation algorithms preferentially surface controversial material because of its high engagement. I will discuss this further in my next post.
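As a purely illustrative sketch, not any platform’s actual method, here is how a recommendation rule that ranks items by raw engagement ends up surfacing divisive material. The videos, scores, and weighting are invented; the point is only that a rule rewarding comments will favour whatever provokes the most argument.

```python
# Toy engagement ranking; all titles and numbers are invented.
videos = [
    {"title": "Gardening basics", "views": 900, "comments": 12},
    {"title": "Controversial hot take", "views": 800, "comments": 450},
    {"title": "Local history talk", "views": 1100, "comments": 8},
]

def engagement(video):
    # Rule: weight comments heavily, since arguments keep people on the page.
    return video["views"] + 10 * video["comments"]

ranked = sorted(videos, key=engagement, reverse=True)
print(ranked[0]["title"])  # the divisive video wins despite having the fewest views
```

Nothing here judges the content itself; the rule simply measures activity, and controversy generates activity.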

When it comes to generative AI, and specifically ChatGPT, the issues for libraries are somewhat more apparent. I’m currently working part-time at the Supreme Court Library Queensland; imagine my surprise when we were warned that ChatGPT was generating false citations in response to requests, as documented by the New Zealand Law Society. And a recent article by Terence Day shows that ChatGPT not only creates false citations, with plausible volume, article and page numbers, but also falsely summarises the contents of real articles, so-called “AI hallucinations”.

This leads us to the Duke University Libraries blog, which contains the following advice about ChatGPT:

Use it for

  • ideas for keywords
  • suggesting databases and information sources
  • suggestions for improving writing.

It doesn’t work for

  • accurately providing a list of sources for a topic
  • summarizing sources or writing literature reviews
  • knowing current events or predicting the future. It’s only as up to date as the last data it received.

Thus libraries have an important role in training people in the ethical use of AI: explaining the uses of ChatGPT and similar tools, describing their limitations, helping people identify where they are most useful, and helping them construct queries that generate useful responses. We need to be aware of AI’s strengths and limitations, and especially be alert to the ethical issues arising from the use of others’ work.

Questions about the future of an AI model that depends on deconstructing and reconstructing people’s original works, used without permission or recompense, belong to a larger conversation. However, these too fall within libraries’ traditional areas of interest.


References

New Zealand Law Society. (2023, March 23). Beware of legal citations from ChatGPT. https://www.lawsociety.org.nz/news/legal-news/beware-of-legal-citations-from-chatgpt/

Day, T. (2023). A preliminary investigation of fake peer-reviewed citations and references generated by ChatGPT. The Professional Geographer, 1-4. https://doi.org/10.1080/00330124.2023.2190373

Encyclopædia Britannica. (n.d.). Artificial intelligence (AI). Britannica Academic. Retrieved May 24, 2023, from https://academic-eb-com.ezproxy.csu.edu.au/levels/collegiate/article/artificial-intelligence/9711

Euchner, J. (2023). Generative AI. Research-Technology Management, 66(3), 71-74. https://doi.org/10.1080/08956308.2023.2188861

Hervieux, S., & Wheatley, A. (2021). Perceptions of artificial intelligence: A survey of academic librarians in Canada and the United States. The Journal of Academic Librarianship, 47(1), 102270. https://doi.org/10.1016/j.acalib.2020.102270

Loubes, J.-M. (2022). Artificial intelligence. In J.-Y. Jeannas, L. Favier, & M. Cauli (Eds.), Digital dictionary. John Wiley and Sons Inc.

Welborn, A. (2023, March 09). ChatGPT and fake citations. Duke University Libraries. https://blogs.library.duke.edu/blog/2023/03/09/chatgpt-and-fake-citations/ 

Categories: AI