
Computers in Libraries 2008: Findability: Information Not Location

This session was presented by Mike Creech and Ken Varnum from the University Library at the University of Michigan.

Each of the individual libraries at UM had developed its own website independently, so users had a hard time finding what they needed because of the information silos the organization had unwittingly created.  There were 33 sites served from lib.umich.edu with about 52,000 pages of content, drawing roughly 3,000,000 page views and just over 500,000 users each month.  Because the sites had grown up separately, there was no unified branding or sense of place.

They decided to take a phased approach by creating consistent navigation and branding for all library websites, bringing consistency to the layout, and extending the MLibrary brand.  They also deleted pages that were out of scope or had very little usage relative to the size of the site (number of pages).  During the transition, each site still kept local autonomy over the look and feel of its content.

The first phase went as follows.  Their long-term goals were to engineer a more effective search solution, break down the silos of information, create an ordered and consistent navigation scheme, and give users a way to build pathways the library hadn't anticipated.  They conducted focus groups with faculty, staff, librarians, and various user bases, asking what they liked and disliked about the site, what they liked about other websites, and so on.  People reported that the site was far too busy and that they had a hard time both searching and finding known information.  They also ran an online survey asking experiential and web-usage questions, plus a one-question survey that appeared randomly to users on random pages of the library's website.  The survey asked: "What did you come to this page to do?" or "Why do you come to the library's website?"  They got 8,000 responses over two weeks and broke out the most-used words and phrases in the answers, such as "search" and "find."  They are continuing to analyze whether, based on where people were on the site, responses cluster in a consistent way that might point to an overarching need or a problem with the site.  They also analyzed their website statistics through Google Analytics and engaged in conversations with MLibrary stakeholders.
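As a rough illustration of what "breaking out the most-used phrases and words" from 8,000 free-text responses might look like, here is a minimal Python sketch.  It is not the library's actual analysis code; the file name, stop-word list, and one-response-per-line format are assumptions for the example.

```python
# Sketch: tally the most frequent words in free-text survey responses
# such as "What did you come to this page to do?".
# Assumes responses.txt holds one response per line (hypothetical file).
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "to", "of", "i", "for", "and", "on", "in", "my"}

def top_terms(path, n=20):
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            words = re.findall(r"[a-z']+", line.lower())
            counts.update(w for w in words if w not in STOP_WORDS)
    return counts.most_common(n)

if __name__ == "__main__":
    for word, count in top_terms("responses.txt"):
        print(f"{word}\t{count}")
```

Terms like "search" and "find" rising to the top of a tally like this is what pointed the team toward findability as the core problem.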

The second phase went as follows.  They wanted to get rid of the information silos while still making sure people know where the information is coming from (its context); the purpose is to foster community and improve findability.  They also worked on syndicating their content so that other people can do things with the information that the library hasn't thought of yet.  Buy-in from the administration came automatically.  They established advisory groups for the redesign, including an Approach group, a User Interface Design group, an Information Architecture group, a Technology group, and a Faculty group.  The administration did not want this to be a committee-driven process, however, so the groups were only advisory and the web team itself made the decisions.  They kept people up to date through an email newsletter and a team blog covering the group's work and progress.

They have also developed an in-house social bookmarking tool called MTagger, which lives on the old/current site and will migrate to the new site as well.  The tag cloud appears on all of the library's webpages, in the catalog, in the digital image library collections, and in the scholarly publishing collections.  It has been up for about a month, and they already have close to 1,000 tags from about 300 people.  Not only are items tagged, but each item's source (its collection or original library) is preserved, so you can follow a tag and still see the context of the information.  It has helped users and staff make connections across information silos they didn't know were there.
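The idea of a tag that carries its source along with it can be sketched as a small data structure.  This is only an illustration of the concept; the field names and collection labels below are hypothetical, not MTagger's actual schema.

```python
# Sketch: tags that preserve the collection (catalog, image library, etc.)
# an item came from, so a tag can be browsed across silos without losing context.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class TaggedItem:
    tag: str
    item_url: str
    collection: str  # e.g. "catalog", "digital image library", "scholarly publishing"

def group_by_tag(items):
    """Group tagged items by tag, keeping each item's source collection."""
    by_tag = defaultdict(list)
    for item in items:
        by_tag[item.tag].append((item.collection, item.item_url))
    return by_tag

items = [
    TaggedItem("great lakes", "http://example.edu/catalog/123", "catalog"),
    TaggedItem("great lakes", "http://example.edu/images/456", "digital image library"),
]
for tag, sources in group_by_tag(items).items():
    print(tag, sources)
```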

Anything you can search on the new site is also available as an RSS feed, and all of their APIs are open to the public for further development.
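Since every search is exposed as RSS, anyone can pull library search results into their own tools.  A hedged sketch of what that consumption might look like is below; the feed URL is hypothetical (the session didn't specify the URL pattern), and it uses the third-party feedparser package (pip install feedparser).

```python
# Sketch: consume a search-results RSS feed from the new site.
import feedparser

FEED_URL = "http://lib.umich.edu/search?q=great+lakes&format=rss"  # hypothetical URL

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    print(entry.title, "-", entry.link)
```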

What next?  They are working on picking a CMS; it's between Drupal, Mambo, and a couple of others, and this is a critical piece of the site's success.  They are going to go with an open-source search tool but haven't picked one yet; UFind, LibraryFind, and Blacklight are the three contenders.  They are still working on the information architecture and graphic design, and they need to migrate the content and applications into the new CMS.
