At the University of Portland library we have been running a pre- and post-test assessment in an introductory Business class, BUS 100, since Fall 2011. At that time, the reference and instruction team wanted to incorporate more assessment into our instruction program. We had good relationships with the faculty in that class and a track record of meeting with all of the sections. Prior to creating the pre-test we discussed our plans with the Associate Dean for Undergraduate Programs and received permission to move ahead. We also filed a request for exemption with our Institutional Review Board and were granted it, as we wanted to be able to publish the results of the assessment project.
In our assessment planning, the reference and instruction team had found an instrument developed at San Jose State University (http://informationr.net/ir/15-3/paper436.html) and another used by Millikin University (http://www.millikin.edu/staley/services/instruction/Documents/08-09CWRRreport.pdf). We had incorporated questions from these surveys into an instrument used in some of our chemistry sessions, and we used that chemistry-session instrument as a starting point for the BUS 100 survey.
Because the BUS 100 students are all incoming freshmen or transfer students, the pre-test/post-test doesn’t contain any questions about their past experience with library instruction or with doing research at UP. Instead, we developed questions more appropriate to the BUS 100 library instruction session and assignment. Originally the test was on paper, and a library student worker entered the test data into a spreadsheet; in Spring 2013 we created an online test, giving up a nearly 100% response rate in exchange for easier data analysis. You can see the most recent version of the pre-test here: https://appsone.up.edu/PerfectForms/player.htm?f=dMiggAMF
Generally we find that students’ understanding of library research changes between the pre- and post-test: they have a better understanding of peer review, they say they’ll go to library databases to look for articles instead of Google, and they select a search strategy using truncation and Boolean operators rather than a basic phrase search.
We have struggled with teaching students to look beyond superficial criteria when evaluating websites. One of the questions on the tests used from Fall 2011 to Spring 2013 asked students to identify the most important item to consider in determining the credibility of an advocacy website. In post-tests, students continued to say the domain (.org, .edu, .com) is the most important, while we would prefer they consider that the site might present biased information. In Fall 2013 we changed the question to have students rank the options from 1 to 4, with 1 as the highest-ranked option, and we found that “particular viewpoint or bias” received more “1” ratings than the other options.
When reporting the data to the BUS 100 faculty each semester, we copy the Associate Dean for Undergraduate Programs and also the professor responsible for gathering assessment data within the business school.