[ETL503 – Module 5: Evaluating Collections]

Explore at least two of the following sources and note the types of analytical data and models of collecting data that are presented.

  1. Karen Grigg, Chapter 9: “Assessment and evaluation of e-book collections” 

“Libraries are now using means such as balanced scorecard, circulation and usage statistics, survey measures, focus groups, and identification of strength areas of the institution as methods to ensure that book collections are vital and relevant.” (p. 128)

  1. Usage data
  2. Overlap analysis
  3. Survey instruments
  4. Benchmarking
  5. Focus groups
  6. Balanced scorecard method

Usage data:

  • collecting and interpreting the data from circulation. Things to look for include (a simple tally sketch follows this list):
    • which subject areas are borrowed the most from
    • whether texts with illustrations are more likely to be borrowed than others
    • any preference for standalone volumes vs series
    • whether theoretical or practical texts are more popular
    • for eResources, whether the number of people seeking to borrow a text exceeds the agreed-upon number in the licence (meaning there’s high demand for that title)
  • Challenges with using usage (circulation) data include
    • too many inconsistencies in how usage data for online and eResources are collected (and these statistics are managed by the vendor, not the librarian)
    • the type of usage allowed for eResources, such as a single-user model where a title can only be borrowed by one person at a time rather than by multiple users (in such cases, potential borrowers are turned away and may not return, making it hard to gauge just how popular a text is)
    • issues with ascertaining how useful an eResource is, as the usage data doesn’t reveal this (it may have simply been promoted more in the search results) (p. 129)
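
A minimal sketch of the kind of tally described above, assuming circulation data has been exported from the library system as a CSV; the file name and column names ("title", "subject", "format") are invented and will differ by system:

    # Tally exported circulation data by subject area and format.
    # Assumes a hypothetical export "loans.csv" with one row per loan.
    import csv
    from collections import Counter

    loans_by_subject = Counter()
    loans_by_format = Counter()

    with open("loans.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            loans_by_subject[row["subject"]] += 1   # e.g. "Science", "Fiction"
            loans_by_format[row["format"]] += 1     # e.g. "print", "eResource"

    # Most-borrowed subject areas first
    for subject, count in loans_by_subject.most_common():
        print(f"{subject}: {count} loans")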

Overlap analysis:

  • “Overlap analysis can query, for each database in a library’s collection, the number of titles in that database that are unique to that database and the number that are available elsewhere in the library’s collection.” (p. 130)
  • used mostly for databases, but possibly with ebook ‘packages’ to “compare titles by subject area, looking for both gaps and overlaps.” (p. 130)
  • helps locate duplications and free up budget to fill gaps (a set-comparison sketch follows this list)
  • Limitations include:
    • duplicates within an e-book package can’t be ‘cancelled’
    • the copies (duplicates) may not be equal, but slightly different versions, meaning the librarian still has to evaluate each “for such factors as ease of navigation, inclusions of graphs and illustrations, and potential negative financial consequences of cancellations that may be imposed by the vendor” (p. 130)
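
A minimal sketch of the overlap analysis Grigg describes, using Python set operations; the package names and titles are invented purely for illustration:

    # For each package, count titles unique to it and titles duplicated
    # elsewhere in the collection (hypothetical exported title lists).
    packages = {
        "Package A": {"Hamlet", "Macbeth", "Frankenstein"},
        "Package B": {"Frankenstein", "Dracula"},
        "Package C": {"Dracula", "Emma", "Hamlet"},
    }

    for name, titles in packages.items():
        # All titles held in any *other* package in the collection
        elsewhere = set().union(*(t for other, t in packages.items() if other != name))
        unique = titles - elsewhere
        overlap = titles & elsewhere
        print(f"{name}: {len(unique)} unique, {len(overlap)} duplicated elsewhere")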

Survey instruments:

  • methods of surveying users (the people borrowing the resources)
  • allows librarians to “evaluate” the impressions users have of the collection, including the e-book collection. Users can note how they feel about “the increasing availability of e-books, what e-books they have used, and what the users perceive as gaps in the collection.” (p. 130)
  • must be used alongside other methods of data collection, as the results are not comprehensive. Usually only the most engaged users participate, so you won’t hear from those who under-use the resources, or find out why they do so

Benchmarking:

  • comparing a library’s collection to another library’s collection (p.131)
  • in selecting another library, it should be comparable in size, subject areas covered, and budget
  • search and study the other library’s catalogue, its “scope and holdings” (p. 131).
  • or network with the other library, to find someone who will provide some data

Focus groups:

  • instead of the quantitative data from user surveys, focus groups allow for “qualitative querying” (p. 131).
  • groups should be diverse
  • discussion based, with open-ended questions and “creative brainstorming” (p. 131)
  • can be used to evaluate an e-book publisher’s site – its navigability, collections and format
  • Limitations:
    • lack of anonymity may cause self-censorship (p. 132)
    • incentives may be required, which can cause some people to attend simply for a free lunch (but not to offer any useful insights)
    • they require a skilled mediator
    • groups of staff can be hard to organise due to conflicting schedules

Balanced scorecard method:

  • “a strategic approach and performance management system that can be employed to identify critical success factors and translate these into performance measures that can be tracked over time.” (p. 132)
  • includes outcome measures – the point at which the librarian decides the collection is serving its purpose
  • results are only as good as the measures used to collect the data (p. 132).
  • it can be a challenge to identify realistic measures.

  2. Amy Hart, Chapter 3: “Collection analysis: powerful ways to collect, analyze, and present your data”

Assess the collection by

  1. Dewey Decimal number (p. 88)
    • how many titles does the library have in each classification?
    • enter data into a spreadsheet in order to analyse it
  2. date of publication (p. 88)
    • to determine currency/relevance
    • however, it’s important to note that some subjects do not date as much as others do
  3. circulation statistics
    • “By comparing the composition of the collection (how many titles in each Dewey class) to circulation counts by Dewey class, we could learn where supply was meeting demand.” (p. 89) (see the sketch after this list)
  4. then make many busy, incomprehensible and illegible graphs and charts to impress higher-ups
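
A minimal sketch of Hart’s supply-versus-demand comparison, with invented counts standing in for the spreadsheet data (titles held and loans recorded per Dewey hundreds class):

    # Compare each Dewey class's share of the collection (supply) with its
    # share of circulation (demand); the figures below are hypothetical.
    holdings = {"000s": 40, "500s": 310, "700s": 120, "900s": 260}   # titles held
    loans    = {"000s": 95, "500s": 180, "700s": 240, "900s": 75}    # loans recorded

    total_holdings = sum(holdings.values())
    total_loans = sum(loans.values())

    for dewey in holdings:
        supply = holdings[dewey] / total_holdings
        demand = loans[dewey] / total_loans
        note = "demand exceeds supply" if demand > supply else "supply meets demand"
        print(f"{dewey}: {supply:.0%} of collection, {demand:.0%} of loans -> {note}")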

Activity: Consider models and methods for collection evaluation which may effectively relate to the learning and teaching context, the needs of users and the school library collection within your school, or in a school with which you are familiar.
  • What are the practicalities of undertaking a collection evaluation within a school in terms of time, staffing, and priorities, as well as the appropriateness of the methodology?
    • as there is often a shortage of time, collecting and analysing data could actually be spread out over the year. (For instance, weeding can be done at any time in the year, while user surveys could be done at the end.)
    • using the information management system to run reports on usage is an ideal place to start. 
    • as noted in Grigg (2012), any model of collecting data should be used in conjunction with one or two others – such as surveys and overlap analysis.
    • ‘benchmarking’ is possibly the least useful, as it would be hard to find another school like mine in the whole state (I can think of one, LC, that comes close but is smaller)
    • we have the staff available, but not all staff have the skill set (yet); they/we need training
  • How do the need for, and the possible benefits of, an evaluation of the collection outweigh the difficulties of undertaking such an evaluation?
    • As difficult and time-consuming as it is to undertake a collection evaluation, it’s always necessary and should be considered a routine part of the job. Otherwise the shelves get clogged with old, dusty, irrelevant texts, new subject areas are under-resourced, the reading interests of the student body are not understood, and online tools and resources such as databases take up big chunks of the budget yet may not be utilised.
    • In discovering what resources are under-utilised (such as our Wheeler e-book platform; only 4 e-books were borrowed in 2021), the TL has an opportunity to promote such a resource to see if patronage can be driven up or, if it really is out of touch with users, to remove it altogether. This would free up space in the library (including online) and budget for other areas.
  • Is it better to use a simple process with limited but useful outcomes, or to use the most appropriate methodology in terms of outcomes?
    • I think it’s a combination, surely. We use what is available, which may be limited but is better than nothing. What we use, though, would be appropriate for the context – not sure I understand the bit about outcomes. It almost seems to imply that you use a methodology that will give you the result you want, rather than what’s actually happening.
  • What are the current priority areas for evaluation in your school library collection?
    • The non-fiction/reference sections
    • promoting e-books to students
    • database usage (EBSCO etc.)

References

Grigg, K. (2012). Assessment and evaluation of e-book collections. In R. Kaplan (Ed.), Building and managing e-book collections. American Library Association. https://ebookcentral.proquest.com/lib/csuau/reader.action?docID=1158439&ppg=144

Hart, A. (2003). Collection analysis: Powerful ways to collect, analyze, and present your data. In C. Andronik (Ed.), School library management (5th ed., pp. 88–91). Linworth. https://primo.csu.edu.au/discovery/delivery/61CSU_INST:61CSU/12131785030002357