The Data-Driven Library, Part 2

On Wednesday, Dec. 11, Library Journal hosted the second of a three-part series on using data to drive decisions in an academic library. The webcast is available for viewing, and here’s a summary.

The webinar contained two presentations:
1. “Making usage data useful,” about benchmarking electronic resource use within a multi-type library consortium in North Carolina.
2. “Impact of adding a discovery layer on top of eResources,” about the impact of discovery-service implementation on usage of journals from 6 publishers, at 4 groups of 6 libraries, each group using a different discovery service.

1. The North Carolina consortium was interested in determining what sort of eResource use they should be seeing and what libraries could do to improve use of eResources.

They identified peer groupings and ended up with 6 peer groups among public and private college and university libraries, 7 among community colleges, and so on. The groups were determined from NCES data — budget, spending on eResources, FTE, and Carnegie classification — following previous benchmarking studies.

They studied 5 databases: Academic Search Complete, MasterFILE Complete, the Wall Street Journal, LearningExpress Library, and SimplyMap, with guidance from the vendors on which data to use. For the article databases, the studied variable was “full-text item views”; for LearningExpress Library, “products added to user accounts”; for SimplyMap, “sessions.”
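A minimal sketch of the peer-group benchmarking idea, assuming the goal is to compare each library’s usage to its peer group. The library names, group labels, usage numbers, and the median-based “high usage” cutoff are all invented for illustration, not taken from the study.

```python
# Hypothetical peer-group benchmarking: group libraries by peer group,
# then flag libraries whose usage exceeds their group's median.
from statistics import median

libraries = [
    # (name, peer_group, full_text_item_views) -- all values invented
    ("Library A", "community college", 12000),
    ("Library B", "community college", 4500),
    ("Library C", "community college", 8000),
    ("Library D", "private university", 30000),
    ("Library E", "private university", 21000),
]

# Collect usage by peer group
by_group = {}
for name, group, views in libraries:
    by_group.setdefault(group, []).append(views)

# A library counts as "high usage" here if it exceeds its peer group's median
for name, group, views in libraries:
    benchmark = median(by_group[group])
    status = "high usage" if views > benchmark else "at or below benchmark"
    print(f"{name}: {views} views vs. peer median {benchmark} -> {status}")
```

The point of peer grouping is that the benchmark is computed only within comparable libraries, so a community college is never measured against a research university’s numbers.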

Framework for usage benchmarks: analyze and report qualities of “high usage” libraries, such as:

Access & Authentication – what variety of access methods is offered?
Awareness & Outreach – was recent training provided?
Content and Collections – what other databases compete for patrons’ attention?
Community characteristics – population size
Library characteristics – how many computer workstations?

Lessons Learned: Different libraries, different ideas about use. Libraries are paying attention to data. Different resources, different patterns of use.

2. The second presentation measured whether a discovery service affects journal usage, not why any effect exists.
There are 4 major discovery tools: EBSCO Discovery Service, Ex Libris Primo, Summon, and WorldCat Local. 24 libraries were selected for this phase, 6 for each of the 4 discovery services. The researchers tried to choose similar libraries, all of which had implemented within 2 years of each other. They used COUNTER JR1 total full-text article views for the 12 months before and the 12 months after each implementation date.
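The before/after comparison can be sketched as follows, assuming monthly JR1 totals in chronological order; the monthly figures and the implementation month are invented for illustration.

```python
# Hypothetical before/after comparison on COUNTER JR1 monthly totals:
# sum the 12 months before and the 12 months after implementation.

def usage_change(monthly_views, implementation_index):
    """Total full-text views in the 12 months after implementation
    minus the 12 months before, given chronological monthly totals."""
    before = sum(monthly_views[implementation_index - 12:implementation_index])
    after = sum(monthly_views[implementation_index:implementation_index + 12])
    return after - before

# 24 months of invented monthly totals; implementation at month 12
monthly = [100] * 12 + [110] * 12
print(usage_change(monthly, 12))  # → 120 (1320 after vs. 1200 before)
```

With a change score per journal (or per platform/publisher pair), the descriptive comparisons below fall out directly.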

Results in descriptive statistics:
8 of 24 platform/publisher combinations had lower journal usage.
No discovery service increased usage for every publisher.
Usage change varied by institution within each discovery tool.
Usage change varied by publisher within each discovery service.
Some publishers saw an overall increase while others saw a decrease.

The descriptive statistics helped set up the inferential statistical tests. For the inferential tests, the researchers decided to look at total usage change (raw numbers) rather than percentage change. One caveat: higher-use journals may have had a greater effect on the means than was ideal.

Benchmarking: across publishers vs. across institutions vs. background change (the usual background change is about a 5% usage increase per year). In this study, the researchers did not examine libraries without a discovery service as a control.
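One way to use that ~5%-per-year background figure is to compare the observed post-implementation total against what the prior year’s usage would have been with ordinary growth. This is a sketch of that idea, not the study’s method; the totals are invented.

```python
# Hypothetical adjustment for background growth: usage change beyond
# the ~5% increase a library might have expected anyway.

BACKGROUND_GROWTH = 0.05  # usual ~5% usage increase per year

def excess_change(before_total, after_total, growth=BACKGROUND_GROWTH):
    """Usage change beyond the expected background growth."""
    expected = before_total * (1 + growth)
    return after_total - expected

# A library with 10,000 views the year before and 11,500 the year after:
print(round(excess_change(10_000, 11_500), 2))  # → 1000.0 beyond the expected 10,500
```

A real control group of non-discovery libraries, as the speakers note, would be a stronger correction than a fixed growth assumption.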

The analysis is preliminary; the researchers are still waiting for thorough review to be sure it is robust.

They used the ANOVA F ratio (the ratio of average variability due to the factor to average variability due to chance error). If F is close to 1, the means are not distinguishable; when F is significantly greater than 1, there are real differences among some of the means.
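The F ratio just described can be computed directly; this is a minimal one-way ANOVA sketch on invented per-group usage-change numbers (the groups might be, say, libraries grouped by discovery service), not the study’s data.

```python
# One-way ANOVA F ratio: between-group mean square over within-group
# mean square, on hypothetical usage-change values.

def f_ratio(groups):
    """F = (average variability due to the factor) / (average variability
    due to chance error), for a list of groups of observations."""
    all_values = [x for g in groups for x in g]
    n = len(all_values)
    k = len(groups)
    grand_mean = sum(all_values) / n

    # Between-group sum of squares (variability due to the factor)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (variability due to chance error)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    ms_between = ss_between / (k - 1)  # between-group degrees of freedom
    ms_within = ss_within / (n - k)    # within-group degrees of freedom
    return ms_between / ms_within

# Three invented groups of per-journal usage changes
groups = [[12, 15, 11, 14], [4, 6, 5, 3], [3, 4, 2, 5]]
print(round(f_ratio(groups), 2))  # → 49.05, far above 1: real group differences
```

In practice one would use `scipy.stats.f_oneway`, which also returns the p-value; the hand computation above just makes the “factor vs. chance” ratio concrete.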

Findings:

Yes, usage change varies across libraries. On average there were 8.5 more uses per journal in the year after implementation.

Were there significant differences in usage change across publishers? No. Error bars make it appear that one publisher had a greater increase in usage than all the others and that one had much less usage, but the F-test shows there is a greater-than-33% chance that the differences are due to chance rather than real underlying differences.

Does usage change (average increased use per journal) vary across discovery services? Yes: Primo (12.3) and Summon (15.0) institutions had a greater increase than EDS (4.5) or WCL (3.7) institutions.

Does the effect of discovery service differ across publishers? Yes: the effect of discovery service on usage change varied across publishers, and no discovery service increased or decreased usage across all publishers.

The researchers still need to compare usage change across libraries, and to compare against libraries with no discovery service as a control.

Also, there could be other factors in journal usage change — instruction, or the site of search origin (the discovery service vs. the publisher’s site directly). This is another reason to bring in a control group: usage could be up across the board.

To run a similar study: COUNTER has consortial reports.
