
I wish I could have gone to both, but I could only make the Internet Librarian Conference here in the states.  Not to fear!  The presentations from ILI 2005 are available now for our self-paced review.  Sweet!

“Internet Librarian International”

  1. Anna Says:

    Do they feature Michael Stephens in nearly every session, too? ;)

  2. Sarah Houghton (LiB) Says:

    I do believe he spoke quite a bit there too. Michael Stephens is a pillar of wonderfulness to the library community.

Leave a Reply

LiB's simple ground rules for comments:

  1. No spam, personal attacks, or rude or intolerant comments.
  2. Comments need to actually relate to the blog post topic.

Competing with Google: Library Strategies
Stephen Abram, SirsiDynix

Stephen Abram is always amazingly entertaining.  Abram began by pointing to the recent article “Libraries: Standing at the Wrong Platform, Waiting for the Wrong Train.”  How do we get ourselves into the next century?  How do we admit that we love books, but also admit (12-step-program style) that books are only a small percentage of what we’re about?  He gave us a fly-by tour of all the things that Google is offering…just the new things they’ve added this year alone were overwhelming.

Current questions: Who’s going to buy AOL?  How will Amazon alliances play out?  Google hired a GAIM developer…what does that mean?  What’s the next phase of OpenWorldCat?  What’s next with ILSes (OpenURL, federated search, etc.)? 

Abram noted the problem of Google Bombing, and how librarians specifically hate it as it alters the quality and accuracy of the information people find.

Stephen’s Top Ten Strategies for Competing with Google

#1: Know Your Market
We have an imperative to aggregate our data.  He pointed to the SirsiDynix and FSU Normative Data Project (http://www.libraryndp.info) as an example of this.  Let’s learn from each other and see what items circulate and what don’t. 
#2: Know Your Customers Better than Google
Understand users in terms of their needs, preferences, desires, goals, values, beliefs, expectations, assumptions, and tolerance for risk and change.  He pointed to the SirsiDynix Personas program as an example of this.  Know that young people (our beloved Millennials) are different…their needs and expectations are different.  Plan for their needs 5 years out.  We also need to pay attention to our other populations: seniors, the poor and the working class, the digital divide, ethnic differences, a non-homogeneous population.  We need to understand the difference between usability and satisfaction, satisficing vs. meeting real needs, and transactions vs. transformations. 

#3: Be Where Our Customers Are
How much of our usage is in-person now?  Let’s be realistic about how much of our use is remote.  Stephen pointed to IM as a key example of this.  85% of people from 15-25 have at least one IM account.  Only 5% of over 30s do.  Add-ons of voice, co-browsing, etc.  He said what I’ve been saying for two years, which is that these huge web-based chat products need to suck it up and move into the IM environment.  Otherwise, they’re going to be out of a job mighty damn quick.

#4: Federated Searching
Federated Search should not look like Google.  We need to build compelling content…what’s current and what people care about. 

#5: Support Your Culture
The whole world is going from textheads to nextheads.  We need to pay attention to MP3s, streaming media, and voice search.  DVDs will die.  We need to be ready for the next format.  He also mentioned podcasting as “stuff we’re gonna have to do.”

#6: Position Libraries Where We Excel
Google does who, what, where, and when really well.  We can’t compete with that.  We’re good at why and how questions.  That’s where we can excel and build collections around why and how questions (how to stay healthy, how to build a marketing plan).  A library’s core competency is not delivering information.  It’s about improving the quality of the question in the first place (e.g. guided searches, “see also,” etc.).  On our websites we need to move beyond read/view environments into act on/discuss, argue/defend, present/teach environments.  It’s an information ocean, not a highway.

#7: Be Wireless
By the end of 2007, broadband wireless providers need to have secure networks.  We’re seeing low-power-consumption mobile devices.  Devices are getting smaller and are starting to know where you are.

#8: Get Visual
Visual representations of search results…bigger and more central to the display means it’s more popular.  Google News Map is one example of this.  He pointed to Grokker [Yay Grokker]  as another example of visually represented information.  Folksonomies and tagging play into this too.

#9: Integrate
Integrate ourselves into our communities.  There are five specific user communities: Learning, Culture/Entertainment, Research, Workplace, Neighborhood.  Understand these user communities and then build things like portals to reflect the needs of the community.

#10: For Pete’s Sake, Take a Risk!
Stephen wants us to sacrifice our fear of success…not to be afraid of a project doing too well.


Expert Reviews of Real-World Intranets
Sunny McClendon, CNN Library
Chris Jasek, Elsevier’s User-Centered Design Group
Andrew Donoho, IBM’s Emerging Technology Division
Sheryll Ryan, Human Factors International

For the two sites reviewed, I’ll list the tips and tricks that the presenters gave during the course of the site examinations.

Site #1: Rodale Library
• Don’t let the header take up too much real estate, especially on an intranet—the site is in-house, you don’t really need to brand
• Avoid redundancy of content and links
• Make sure that there is one obvious and prominent navigation scheme
• Think about what is the purpose of the page, and make the most important parts of the page be the most prominent
• Anchor down the left-hand side
• If you provide an alphabetical list, with a clickable alphabet at the top, make all the content available on that one page—don’t require clicking to get to any content whatsoever.

Site #2: Toronto Star
• Make sure that you have a coherent organization, that things aren’t too split up and spread out…watch out for too much chunking.
• Avoid vague terminology—make sure that headings and links are adequately explained.
• If you have long lists of content, try to organize them with a table of contents or an alphabetical index at the top of the page.
• Be consistent in the fonts you use.
• Have consistent navigation throughout the different pages.

The last speaker recommended using CSS stylesheets to govern the overall look, design, and organization of your pages.  Adhere to the simple rules of graphic design: the top left is the focal point; follow left-to-right order in order of priority; frequency-sort your left-hand bar of choices.  He then focused briefly on the CNN intranet, but time ran out and we all had to leave to get to the closing keynote on time.


Evaluating Search Tools
Mary Ellen Bates, BatesInfo.com

Much of what Mary Ellen had to say can apply not only to search engines but also to all of those subscription databases that we are constantly trialing and evaluating.  This was a great session, and one of the most practical “chock-full-o-info” sessions I attended during the conference.

Using Your First Impressions
Don’t start searching/typing away.  What are your first impressions?  Does anything not make sense?  Can you see links to help files, advanced search, about us?  What options are available in the pull-down menus?  What options are available in the advanced search screen?  Does it require registration?  If so, what is their privacy policy?

Tests to Run
Run through a collection of test searches:
• Something fairly current (within the last month, but not yesterday).
• A specific fact (height of the Eiffel Tower).
• A quick look-up (URL for the Colorado 14ers Initiative).
• Something with common words (how does it handle ambiguity?).
• Something you are personally interested in, consumer-type searches, and a search that you would typically run during your workday.
• Field searches (title, inURL, etc.).
• Intentionally incorrect searches (typos, a specific URL, inappropriate punctuation).
• Phrase searches, one-word searches, 5-word searches, nested logic, filetype searches, domain-limited searches, foreign language searches.
• Country-specific searches: choose a webpage you know is based in another country but does not have that country’s domain name, search by page title and limit by country…does it come up?
Also check: Can you search within your results?  What kind of Boolean logic is available?  What is the default?  What are the options?  Can you sort results by date?  What is the engine searching (open websites, content within databases, human-built directories, blogs, etc.)?  Try it on IE, Firefox, and other browsers.  Do they require a toolbar or plug-in that only works in one browser?  Compare the results with engines you’re already comfortable and familiar with to see how much overlap there is.  Do any of the results seem weird, like they shouldn’t be there?  Be sure to set your search results display default to 50 or more results per page.
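Bates’s battery of test searches lends itself to a small, repeatable harness.  Below is a minimal sketch in Python; `run_query` is a hypothetical caller-supplied function (not from the talk) that takes a query string and returns a list of result URLs for whatever engine or database you are trialing:

```python
# Minimal sketch of a repeatable search-engine test harness.
# `run_query` is a hypothetical caller-supplied function: it takes a
# query string and returns a list of result URLs for the engine or
# database being evaluated.

TEST_QUERIES = [
    "height of the Eiffel Tower",   # specific fact
    "Colorado 14ers Initiative",    # quick look-up (URL)
    "java",                         # common/ambiguous word
    '"open content alliance"',      # phrase search
    "librarry",                     # intentional typo
]

def evaluate(run_query, queries=TEST_QUERIES):
    """Run each test query and record how many results came back."""
    report = {}
    for q in queries:
        try:
            results = run_query(q)
            report[q] = len(results)
        except Exception as exc:
            # A crashing engine is itself useful data for the evaluation.
            report[q] = f"error: {exc}"
    return report

if __name__ == "__main__":
    # Stub engine returning a fixed result list, just to show the shape:
    stub = lambda q: ["http://example.org/1", "http://example.org/2"]
    print(evaluate(stub, ["java", "librarry"]))
```

Swap the stub for a real query function and re-run the same battery whenever an engine updates its interface, so comparisons over time stay apples-to-apples.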

Reviewing Results
See what’s included on your results pages: layout, images, text, links, citation or other data about each result, modify your search, related terms, see also, other ways to expand or narrow your results.  If there are paid ads, are they clearly labeled?  How intuitive are the search results?  How current are the search results?  Does it require the use of pop-ups or additional software or plug-ins?  Can you customize the display of the site?  Does it have the must-have features for your audience/organization?  Does it include links from a human-built directory and are they flagged?

Additional Checklist
How did you hear about the site (trusted source, web search, found it accidentally)?  Look for reviews of the site by the people you consider authoritative.  How long has the site been operating (check the Wayback Machine)?  How has it changed over time?  Is it in beta or just out of beta?  If you are reviewing a directory or other human-built resource, what’s the creator’s credibility?  How frequently is it updated?  Look for links to the site, particularly from authoritative people.


Fueling Engines for the Future
DeWitt Clinton, A9
David Mandelbrot, Yahoo
Peter Norvig, Google

This panel of representatives from three very large and powerful companies was very interesting from the point of view of a regular user (that’s me!).

A9
DeWitt began by telling us that A9.com powers Amazon’s product search.  You log into A9 with your Amazon account.  Results appear in columns (vertical searches).  One of the verticals is “web” and one is “images.”  There are other verticals you can choose: books, movies, blogs, people, etc.  DeWitt noted that most search engine APIs, while proprietary, are very similar, so A9 introduced OpenSearch.  OpenSearch is a proposal for a common format for search requests and results; it identifies the minimal subset of data necessary for search syndication and re-uses existing and familiar standards like RSS.  A9 offers about 300 OpenSearch columns (white pages search, Creative Commons, Flickr, PubMed, etc.).  OpenSearch was launched in March 2005, has a new search engine added every day, and is Creative Commons licensed [nice!].  Microsoft is building OpenSearch into the next version of Internet Explorer. 
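Since OpenSearch re-uses RSS, a syndicated result set can be consumed with an ordinary XML parser.  A minimal sketch follows; the sample response is invented for illustration, and the namespace URI and element names (`totalResults`, etc.) are my reading of A9’s OpenSearch 1.0 RSS extension, so verify them against the current spec:

```python
# Parse an OpenSearch 1.0 RSS response with the Python standard library.
# The SAMPLE document below is invented; the namespace URI is the one
# A9's OpenSearch 1.0 spec uses (verify against the current spec).
import xml.etree.ElementTree as ET

OS_NS = "http://a9.com/-/spec/opensearchrss/1.0/"

SAMPLE = """<rss version="2.0" xmlns:openSearch="{ns}">
  <channel>
    <title>Example Search: librarians</title>
    <openSearch:totalResults>4230</openSearch:totalResults>
    <openSearch:startIndex>1</openSearch:startIndex>
    <openSearch:itemsPerPage>2</openSearch:itemsPerPage>
    <item><title>Result one</title><link>http://example.org/1</link></item>
    <item><title>Result two</title><link>http://example.org/2</link></item>
  </channel>
</rss>""".format(ns=OS_NS)

def parse_opensearch(xml_text):
    """Return (total_results, [(title, link), ...]) from an OpenSearch RSS doc."""
    root = ET.fromstring(xml_text)
    channel = root.find("channel")
    # ElementTree addresses namespaced tags as {namespace}localname:
    total = int(channel.find(f"{{{OS_NS}}}totalResults").text)
    items = [(i.findtext("title"), i.findtext("link"))
             for i in channel.findall("item")]
    return total, items

total, items = parse_opensearch(SAMPLE)
print(total, len(items))
```

The same parser works for any OpenSearch column, which is the point of the format: one client, many engines.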

Google
Peter [who I also saw speak at the California Library Association’s conference in 2004] showed us some of the newer Google features.  One is its direct answer service, where you type in a question, and you get the answer at the top of the results page.  He also mentioned what Greg did in the last session, the “See results for…” a similar search to your search, listed half-way down the results.  Google is also offering statistical machine translation for translating materials from one language to another.  They’re averaging 1-2 disfluencies [umm, is that a word?] per sentence.  He showed us Google Maps where you can drag things around and view satellite images.  He showed us some examples of people doing interesting things with Google APIs, including Placeopedia which links Wikipedia place articles to the locations on maps [this seems pretty cool, and is something I personally haven’t seen before]. 

Yahoo
David described some of the newer features from Yahoo.  Yahoo’s search ethic can be described as FUSE: Find, Use, Share, and Expand human knowledge.  Yahoo is partnering with for-pay content providers and allowing users to personalize what results they get from for-pay providers (like the Wall Street Journal).  He also noted that they provide a search specifically for Creative Commons content (also mentioned in the last session).  Yahoo launched My Web 2.0, allowing users to save, tag, and annotate pages they’ve found useful and to share them with other users.  Searchers can then narrow their results to things that others in their communities have found useful.  Finally, he discussed Yahoo’s participation in the Open Content Alliance, a joint effort of international organizations to build an open and permanent digital archive (competing with Google Library).  They plan to offer the full text rather than just snippets, as Google does.  It will be freely crawlable and include both multimedia and text content. 

Questions from the audience
• Someone who works for NIH asked what these three companies are doing to include publicly funded research (open access scholarly literature) in their engines.  Peter noted Google Scholar, David noted that Yahoo has partnered with NIH to get live feeds from their database for their search engine, and DeWitt stated that A9 hopes that content providers like the NIH will use syndication formats like OpenSearch to allow clients anywhere to access their content.
• Someone asked about copyrighted scholarly material and how to create access to these materials.  Greg responded that it’s a bigger problem than search engines can address.  David responded that a lot of publicly available material seems to be available only in licensed for-pay databases, and that is a problem.  He also added that the Open Content Alliance will start making data available this year, completely for free, and that’s one small step in the right direction.
• Someone asked the panelists if any of their new initiatives have made something they had in the past seem less relevant.  DeWitt stated that OpenSearch will probably overturn many things that search engines have right now.  David noted that Yahoo’s Directory is being made irrelevant by people’s tendency to search rather than browse.  Peter responded that the tabs at the top of the page (image search, etc.) are becoming less relevant because they are not visible to many searchers; something that looks like an image query will go straight to the image results instead of making the user click on the tab.
• Gary Price noted that the same day Yahoo announced Yahoo Subscriptions, Gale announced a similar service (Access My Library) that would put Gale content into Yahoo databases and make access available to library users with a log-in.  Peter noted that Google is trying to get that kind of information as well.  David mentioned Yahoo’s partnership with OCLC as well, with Open WorldCat.
• Someone asked about the satellite data in Google, and what the latency period is as some of the data seems to be a couple of years old.  Peter admitted it is spotty, and that coverage of metropolitan areas is better.  DeWitt noted that A9 is covering new cities all the time and refreshing older stuff.
• Someone asked whether it would be difficult to mark the satellite images with the dates they are from.  Peter replied that this is certainly a good idea. 


Search Engine Update
Gary Price, ResourceShelf
Greg Notess, Search Engine Showdown

Presentation available at: http://www.resourceshelf.com/ilwebsearch05.html

Greg and Gary spoke to a completely packed room—500 people or so, perhaps.  Greg started by stating that he always starts with something other than Google.  It lends credibility to his searching skills with patrons; if they see he’s starting with Google, they’ll usually just say “oh, I’ll just do that for myself.”  Gary then spoke briefly about an article he wrote recently about people trying to get their information out of Google, and reminded us that it’s not just about getting things out of Google, but out of all the other search engines as well. 

AskJeeves
They showed us all of the “did you mean” references they give you…and direct links to expand or narrow your search and find related items.  AskJeeves does cache pages and displays the date they were cached.  A search for the Beatles in AskJeeves brings back a brief biography from Who2, links to photos, products, music, Wikipedia, and more related names.  They also provide “binoculars,” which link to a static image of what the page looked like when it was last cached.  The two complaints about AskJeeves are 1) its name and 2) its quality.  Gary notes that AskJeeves has really improved its product in the last five years and that it’s dramatically different.  Others complain about the number of ads, but Greg notes that other search engines are worse in this category.

MSN
They have their own unique database; they provide a cached version with a date, a query-building “Search Builder,” free access to Encarta, and Virtual Earth.  Both Gary and Greg note that they do a better job keeping things fresh than the other engines do.  The Encarta access truly is completely free for two hours every time you do a search, but you just get the text content for free—not the multimedia.  The Search Builder is great to use when teaching searching…it helps you limit by domain, country, language, etc., without the searcher having to know search syntax.  It also provides sliders that allow you to limit and guide your search even more.  Virtual Earth plans for 3-D images of specific buildings and streets in its next release.

Yahoo
There is a plainer search page at http://search.yahoo.com, without all the extra Yahoo content found on their main page.  Yahoo allows you to customize the tabs that display on your page (maps, images, video, etc.).  Yahoo is also caching pages but does not display the date a page was cached.  They do have a link to the Internet Archive Wayback Machine entry for the page.  The advanced search form gives you the option to conduct a Creative Commons search to limit to content licensed under a Creative Commons license (hurrah!).  Greg notes that so far, the majority of material that he finds this way tends to be from blogs, but he expects that to change in the future as more people use CC licenses.  Yahoo also offers a search subscription, which gives you access to some content from licensed databases like Factiva.  Yahoo is also offering a blog search (in beta), and their news search indexes many more sources than Google News does.  Gary praised Yahoo’s sliders that allow you to sort your own results based on your preferences.  For example, if you choose desktop computers in Yahoo Shopping, and then alter the sliders based on how much you want to spend, how important processing speed is, etc., then the results and the descriptive snippets will change based on your slider preferences.  They also showed us Yahoo Mindset, with a slider between shopping and researching.  Gary also noted that mobile is becoming a very big thing in information retrieval online, and Yahoo allows you to send map and directory information to your cellphone via SMS.

Google
Google is offering quick-links below search engine results to the site’s most popular pages (e.g. ALA’s Banned Books Week page).  Greg also notes that sometimes partway down the results list, they’ll say “did you mean X” and then list the top few results for that search.  Greg says he’d rather have that information displayed at the top of the results page.  Google’s blog search isn’t really a blog search (says Greg) because it actually searches RSS feeds, not blogs.  They’re also having some branding issues with Google Print (the publisher project) vs. Google Library (the project with libraries).  Both Gary and Greg said that they’d like Google to be more open about the features they’re offering, the processing they’re using, so that we can use their tool better.  Gary says that if you clear your Google cookies on a regular basis, you’re more likely to see the new features. 

A9
A9 has a subset of Google’s database, but 200 other databases as well (including Amazon’s book database).  You can limit to what results you want (e.g. books, Answers.com, etc.).  The Search Inside the Book feature offers more books than Google Print does.  The results come back in del.icio.us-like folksonomies, showing the most frequent words inside the book.  The A9 maps (block-level views) allow you to see photographs of buildings, interior and exterior (http://maps.A9.com).  The A9 Toolbar allows you to add the search functionality of any search box to the toolbar (e.g. the LII search box).

Exalead
They updated their interface last week and expanded their number of pages covered from 1 billion to 2 billion pages.  Features they offer include related terms, thumbshots, related categories, ability to limit by specific filetypes (pre-limited by the types of files found for the search you ran), searching with truncation, proximity searching, and phonetic searching.

Gigablast
Indexes 2 billion pages.  There’s an auto-link to the Wayback Machine, custom topic search pages, and more.

Others to keep an eye on: Rollyo.com (using Yahoo for subject searching), RedLightGreen (union catalog that uses FRBR, links into local libraries), Topix (huge news database with over 200,000 topic groupings and 12,000 sources, RSS feeds), and Findory (personalized news).


Google: Catalyst for Digitization?  Or Library Destruction?
Roy Tennant, California Digital Library
Rich Wiggins, Michigan State University
Adam Smith, Google

Rich began by showing a slide about why Google should never be bought by Microsoft: the nasty little Clippy [die, Clippy, die!] pointing someone who wanted to find Wikipedia to Encarta instead (and asking for a credit card).

He discussed digital library projects up until now…many “cream of the crop” projects: small projects of easy-to-digitize materials.  Why not expand this to larger projects, like everything in the Library of Congress?  The problems with this would include storage space, which depends on what resolution you’re scanning at, color depth, etc.  Rich estimates that digitizing the Library of Congress would require 20 terabytes of digital storage.  Storage is getting cheaper, broadband delivery is cheaper, digital imaging is cheaper, and labor (or automated labor) can be cheap as well.

What would it cost to do something like this?  As of a few years ago, it was estimated at 40 cents per page, but Rich guesses that it could now be (with the right technology) as little as 5 cents per page.  If you compare this cost to the cost of cataloging and storing physical materials, digitization is much cheaper.  OCR is becoming a lot better.  We could digitize everything, and then only OCR it when someone requests the material.  [I think this is an AWESOME idea, and one I hope people are really considering.]  The Internet Archive and Google use cheap storage in RAID arrays, which comes out to only 50 cents per gig. 
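Rich’s back-of-the-envelope figures are easy to check.  A quick sketch using the numbers from the talk (20 TB for the Library of Congress, 50 cents per gigabyte of RAID storage, 40 vs. 5 cents per scanned page); the one-million-page scanning example is my own illustrative number, not from the talk:

```python
# Back-of-the-envelope check of the digitization numbers from the talk:
# 20 terabytes for the Library of Congress, RAID storage at $0.50/GB,
# and a per-page scanning cost that has fallen from ~$0.40 to ~$0.05.

TB_IN_GB = 1024            # binary terabytes; decimal (1000) would also do
STORAGE_PER_GB = 0.50      # dollars, the Internet Archive/Google RAID figure

def storage_cost(terabytes, dollars_per_gb=STORAGE_PER_GB):
    """Total storage cost in dollars for a collection of a given size."""
    return terabytes * TB_IN_GB * dollars_per_gb

def scan_cost(pages, dollars_per_page):
    """Total scanning cost in dollars for a given number of pages."""
    return pages * dollars_per_page

loc = storage_cost(20)                  # the 20 TB estimate for the LoC
print(f"Storing the LoC: ${loc:,.0f}")  # about $10,240

# Scanning one million pages at the old vs. new per-page rates
# (one million is an arbitrary illustrative number, not from the talk):
print(f"Old rate: ${scan_cost(1_000_000, 0.40):,.0f}")
print(f"New rate: ${scan_cost(1_000_000, 0.05):,.0f}")
```

At these prices the storage bill is trivial next to the scanning labor, which supports Rich’s point that wholesale digitization is now mostly a question of scanning cost.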

If it’s worth keeping an item in the collection, it’s worth digitizing the item.  Rights management is a huge barrier, one that Google has engaged.  Rich points out, though, that right now obscure print titles sit on the shelf and don’t get read, but once digitized they become more discoverable and the authors will probably get more royalties.  [This has been said many many times and I agree with it 100%.]  Rich recommends against only digitizing the good stuff, as figuring out what the good stuff is would be much more expensive than just digitizing everything. 

What are some of the benefits of Google Print?  Preservation; access (particularly for small communities); improvements in digitization technology; the creation of new digitization standards (e.g. XML mark-up); and forcing the issue of large-scale rights management.

Rich analogizes Google Print to the Apollo Program: we should embark upon projects like Google Print not because they are easy, but because they are hard.  Google Print’s vision is to digitize the most important books in the global corpus.  They’re encountering challenges in technology, institutional delays, and legal issues.  This vision will be achieved by a company and not by a government agency, which is a departure from previous huge society-changing projects like this.

So, why should we trust Google?  Well, they’re smart and they do really good stuff.  They have hired the smartest computer science minds, show no fear, and leap over obstacles that usually stop others.  They won’t do this alone though…the Open Content Alliance will be with them all the way.  [Yay OCA!]

Roy took over from Rich and started by saying that what he really believes is that more access and easier access is better, and that there’s a lot of room for many players in this space and that is a good thing.  He commends Google on its digitization efforts,  buuuuuut…is Google the devil?  Or merely evil? 

Scary Monster #1: A Copyright Cataclysm
Libraries have long enjoyed “Fair Use” protections.  Google’s attempting to shield their efforts under that same umbrella, and Roy fears that this may destroy Fair Use exceptions for us all. 

Scary Monster #2: Closed Access to Open Material
Roy believes Google will fix this problem soon, but right now it’s still the case.  For example, there are many copies of The Call of the Wild open and accessible to all in the public domain.  But if you go to Google Print, you wouldn’t know that.  All that shows up are the access-restricted versions that Google Print has scanned.  They give you links to buy the book, but no links to libraries or to any public domain copies.

Scary Monster #3: Blind Wholesale Digitization
Large research collections are often not weeded, by policy.  Since research libraries are judged by the number of their holdings, they keep all kinds of stuff that most libraries would weed.  He states that copyright will restrict access to the recent material, which means that the old, out-of-date stuff will be what’s accessible…not the current, accurate information.  The Open Content Alliance is focusing more on collections instead.

Scary Monster #4: Ads
Since almost all of Google’s profit comes from ads, how long will it be before we see ads for antidepressants next to the record for Hamlet?

Scary Monster #5: Secrecy
Most agreements between Google and libraries have been kept secret.  The libraries themselves could not talk to each other about the deals they had made with Google.  The U of Michigan revealed theirs after a Freedom of Information Act request was made for it…months after the project was announced.  Rumors indicate that the U of Michigan has the best agreement from the library’s perspective, and now other libraries are clamoring to get similar favorable contract terms. 

Scary Monster #6: Longevity
What do Google, Enron, and WorldCom all have in common?  They are (or were) publicly traded companies motivated by profit, and two of them are now gone. Size does not mean that you are immune to market forces.  What does Google have in common with libraries?  Only that we’re both on planet Earth.  Which one would you trust with your intellectual heritage?

Adam Smith, product manager for Google Print and Google Library, then came up to the podium to speak.  He says that Google always welcomes comments and criticisms, and that helps them make their products and services better.  They like to launch things quickly and without warning so that they can see how people use them and then use that data to make them better (apparently that’s why there are a zillion and a half Google beta projects).  Adam stated that there are difficult issues to deal with, like copyright.  As ambitious as the Google Print project may seem, it’s only the tip of the iceberg.  They are happy to see other similar projects, like the OCA.

Questions from the audience:
• Someone asked Adam to tell us a little about the scanning technology (specifically the rumored scanning robot) they’re using.  Adam replied “it’s rumors.”  He stated that they have two different technologies…one for the publishers (a destructive scanning technology) and one for the libraries (a proprietary non-destructive scanning technology about which he can’t disclose anything). 
• Someone brought up privacy issues as a Scary Monster that wasn’t brought up.  If people download books, will cookies be left on the machine?  Adam responded that Google Print, as any Google product, is beholden to the Google Privacy Policy.  All the information is there about what they will and won’t do with the information they gather.
• Someone asked if it was true that one of the libraries requested that only manual page turning be used for the scanning process.  Adam responded that he could not comment.  Rich speculated that the cost would go up so incredibly that this seems infeasible.  Stephen Abram then commented that the Oxford Library had stated at a conference that that was how it was being done there.
• Someone asked if Google could/should have done something differently in implementing the Google Print for libraries program.  The audience member suggested that putting up samples of what the records would look like, FAQs, etc., would have helped the program be more successful from the start.  Adam responded that Google had spent a lot of time on the website detailing what they were doing when they launched it.  Adam feels that the press picked up the story and sensationalized it quite a bit, to the detriment of accuracy.  Roy commented on the copyright issue…that the publishers were taking the copyright law very literally, and that Google is not…that they have one copy and index it, but don’t distribute that copy (just very brief snippets of it).  Roy thinks that the publishers will win on this issue.  Rich commented that he didn’t see how scanning in copies of books but not sharing them with people breaks copyright law, and that he’s happy that Google is engaging copyright giants like Disney on this issue.
• Someone commented that digitizing the information is really important, but accessing that information is even more important.  The audience member felt that the 10-20 documents listed per results page is really limiting, and is wondering if Google or anyone else is looking at a new way to display search results, perhaps graphically to display more results per page.  Adam responded that right now, they’re focusing on getting more books into the index.  There are teams of people at Google though who are constantly working on results displays.  Rich rephrased the issue as “does page rank work for book rank” and the answer is “no.” 
• Someone said that he’s worried about the notion that Google Library might become a way to discover books that you can’t get, or a way to discover books that you can get but which are outdated.  Is Google working on a way to make access to the books easier (e.g. finding a library nearby), perhaps through OCLC?  The audience member then noted that many libraries aren’t in OCLC due to cost issues and won’t be in the future for the same reasons.  Adam also mentioned that they’re working on noting access to libraries’ e-resources as well.  Rich then commented that one way to think about it is that Google is building the world’s largest Carnegie Library, and we’re complaining that they’re not building a bus system to get to the Library.  Let’s praise them for what they are doing.  [Right on.  I think it is interesting though that our actual library patrons do complain that we haven’t provided a bus system that will take them to every one of our branches.  Good analogy.]
• Someone asked who decides what snippets get posted for the books.  Adam responded that that is still in flux, but that it does depend on what you search for.
• Liz Lawley commented that Microsoft Research collaborates with college researchers on search technologies (like little customizable sliders), while Google is extremely secretive about its innovations.  How does Google reconcile this secretiveness with its claim to be doing these things for the good of humanity?  Adam responded that he’s not the right person to answer that question.


“Internet Librarian: Google: Catalyst for Digitization? Or Library Destruction?”

  1. Kevin Marsh Says:

    “Someone said that he’s worried about the notion that Google Library might become a way to discover books that you can’t get, or a way to discover books that you can get but which are outdated.”

    That was me: Kevin Marsh, Network Services Librarian, Texas State Library and Archives Commission. I think Rich’s rebuttal by way of analogy completely missed the mark. There is no Carnegie library that only makes out-of-copyright books available to patrons. My point is not that they didn’t build a bus service; my point is that they are building a dysfunctional library.

    Without considering and planning for Retrieval, Discovery is at best of limited value and at worst an active disservice to the users. Google is building a very attractive and convenient service that will draw in many users, and then only provide those users with out-of-date information or options to purchase the desired in-copyright items. Google can and should do better. Links to WorldCat data are a good start but far from a complete answer. A complete answer would need to be aware of the location of the user and the holdings of nearby libraries (whether or not they are included in OCLC).

    Note: This is a personal opinion and does not reflect any official position of the Texas State Library and Archives Commission.

  2. Sarah Houghton (LiB) Says:

    Thanks Kevin for identifying yourself. I agree that the library is dysfunctional. Typically, libraries weed out materials (particularly science, technology, medicine, etc.) to get rid of outdated items so that we can be sure we’re not giving patrons bad or incorrect information. The Google Project doesn’t seem to be taking that into consideration…and instead, out-of-copyright and outdated materials will reign. Now, in areas like literature, this doesn’t matter a whole lot, but there are subject areas where it can be a huge problem. I also think, though, that part of Rich’s point was that Google is doing something…and something’s better than nothing. I also think his comment was directed toward the question about ease of access through OCLC, not the outdatedness of the materials. My guess is that both of you would agree on that point.


Google-brary: The Status Quo of Tomorrow’s MEGALIBRARY
Barbara Quint, Searcher Magazine
Stephen Abram, SirsiDynix
Roy Tennant, California Digital Library
Mark Sandler, University of Michigan
Rich Wiggins, Michigan State University
Adam Smith, Google Print
Steve Arnold, Arnold Information Technology

WARNING: This is an extremely long post.  I typed as much of what the speakers said as I could with my tired little fingers.  Read on.

The room was standing-room only…late on Tuesday night.  It was amazing to see how enthusiastic and awake people were after a full day of presentations.   

Stephen Abram (SirsiDynix) started by asking “How many people are here to rumble?”  The crowd responded with a resounding “whoo-hoo!”  Stephen was an amazingly dynamic moderator.

Adam Smith (Google Print) talked about the Google Print program, including the three different user experiences: the publisher program, the library/public domain program, and the library/in copyright program.  Adam started by saying that he’s here to dispel much of the misinformation that is being bandied about in the press (formal and informal) about Google Print.  What’s indexed?  Full text…period.  Google Print’s goal is to index digitized versions of all of the books ever printed.  The publisher program is all books for which Google has contracts with the publishers (allowing users to see up to 20% of the book for free).  99.9% of the books in the Google Print index today are from this publisher program.  Public domain books from libraries—the goal here is to make everything available, as quickly as possible.  Adam called the in-copyright library books the “demon child” of Google Print.

The other panelists were asked to forecast what libraries and technology will be like in 2020.

Rich says that by that point Google will have forced us in the library world to “think big.”  He believes that by that point we will move beyond the technical and political hurdles on the way to Google Print.  Google will have attacked this problem and solved it.

Roy says that in 2020 he sees Blog People (whoo hah!).  Roy then (sort of) joked that in 2020 after mismanagement, corruption, and embezzlement, Google will file for bankruptcy.  And of course, be exonerated.  He says that by that point we will indeed know just how hard digitization is.  MARC will have died.  Furthermore, he has said that in 2020 he will have retired.

Mark says that in 2020 the Internet Librarian Conference will simply be called the Librarian Conference, and that ALA will be hosting the American Print Librarian Conference.  Smart publishers will still exist.  Google may or may not be here.  There will be small libraries with 50 million volumes available to their readers… (Barbara came in on speakerphone and said “yes!”).  He sees a world where, regardless of how Google Print and other projects move forward, by 2020 the world’s information will be available from everywhere to everyone.

Barbara says that in 2006, Google will launch Google Press, which will be a way to pry authors from the Authors Guild by looking for loopholes in their contracts.  Google will then promise those authors that not only will it digitize their books, but all royalty monies will go directly to the authors.  She also says that the Open Content Alliance will be much like Google Print…and that Google Press will be re-named the Google Full Court Press.  Google will have lined up a series of print-on-demand houses and a community of editors, and publication will be direct to the web…if you sign up with Google Press.  Furthermore, all libraries in the country will receive one free copy of the printed book and free access to the online copy.  Authors will be divided between those who go with print publishers and those who go with Google Press.

Steve says that in 2020, he will be rather old, and this Google Print revolution will have been the second he has lived through; the first was the revolution of the commercial database industry.  Steve says he’s very interested in Google Base, which will allow Google to be more involved in the science and technology resources industry.  Steve also noted that Google is the new Bell Labs, and we can see the kind of innovation that comes from smart people learning and playing.  It will give Google Print the opportunity to re-invent the publishing industry.

Stephen then briefly explained Google Base, for which there was a quasi announcement today, a new product that will host any piece of content (any website, database, etc.) for free. 

Adam says that in 2020, digitization will no longer be discussed.  Everything will be digitized automatically and be full-text searchable.  At that point they can turn their attentions to researching and space exploration :)  In 2020, everyone is an author and a publisher (we’re seeing the beginning of this today).  This will move beyond text into multimedia and peer-to-peer communication.

Stephen then asked the panelists, from 2020 looking back, what is the role for the librarian, for the information professional?

Mark says that while there is a wonderful opportunity for libraries, there will be an incredible amount of redundancy if we maintain the “local model” (delivery of goods at a local level).  He believes that we will lose a lot of small public and academic libraries in this process.  The notion of local collection building will be called into question.  Libraries will have to re-examine their missions and begin to “pamper” their users…to be active places users come to for expert help.  We’re going to have to give up what we currently do and cherish, and develop new strategies to connect users to content.

Barbara says that in 2020, authors will be heavy into promoting their own materials and will be more connected with their readers—not with librarians or commercial vendors.  The prices of print books will keep dropping.  The number of open access books will keep increasing.  What librarians will have to do is keep discriminating between good material and bad material (and there will be a lot of bad material in this future self-publishing world).  Librarians will become pro-people censors…cutting out all the low-quality junk material.  Every librarian at this point works for two constituencies: the one that directly pays their salary…and the world.  She said “Zipcodes do not determine the power of librarians.”  (Note: I really like that idea, though I don’t quite know how I could sell it to our Board.)  Barbara thinks that we’ll start to see ALA-approved and MLA-approved icons that mark materials as having our seals of approval.

Roy says that digital will not make print go away.  What we’ve found is that when we put digital books up online, print sales increase.  Furthermore, libraries have never been just about stuff.  We’re about service.  Those services will remain in the future.  He says he’s willing and energized to help invent whatever new thing we provide.

Rich says that we’ll be using different devices to interact with digital content.  Displays will get better, the amount of storage per ounce will get better.  When we can think of something the size and weight of a piece of paper with amazing storage capacities, the whole equation of this discussion changes.  Obscure books will have greater visibility (again with the long tail theory).

Stephen says that everyone in this room needs to wake up.  ALA, SLA, etc. need to get much more engaged with the role of the libraries as an institution.  Otherwise, we’ll have a repeat of what happened in Salinas with library closures.  Information will not be limited to a particular medium.  He believes we will see a continuous “pushing-down” of library functions…and at the top we’ll have large major institutions of libraries (like metropolitan philharmonic orchestras) overseeing operations and services.  He thinks we’ll have a much more militant professional group, and a much different way of teaching information literacy, and a different view of how to bring these artifacts to the constituency. 

Adam says that we’ll see a clearer distinction between misinformation and information.  Editorial judgment will become more important.  How people communicate research and philosophy will be independent of format.  How we communicate to others what is good will become the focus of our world (del.icio.us anyone?).  The time we need to invest in getting material to our desktops will decrease dramatically.  The social network effect will allow the “truly good” to rise to the top.  He says that having things in a digital environment makes it extremely easy to build digital collections in a way that hasn’t been possible to date.  The role of the censor changes in the future and becomes more powerful.

Questions from the Audience:
• Someone observed that everyone on the panel has some vested interest in Google, and asked whether any authors had been invited.  She said that Google postures as distributing this information, but that realistically medical information will be funded by public institutions.  She noted that Gates is funding research, so why isn’t Google doing the same thing?  The responses were that yes, many of the panelists are authors, and none except for Adam has a real vested interest in Google.  Barbara noted that libraries spend more on their periodicals than on their books, and asked why libraries are paying so much for journals and other periodicals when the authors aren’t making any money from them.  Rich responded that he authored an early Internet book for McGraw-Hill which had a limited audience, and that he would very much like for Google Print to index that book.  Adam said that Google Scholar is indexing anything they’re allowed to, and linking to libraries directly.  He said that Google is taking this project very seriously and enjoying partnering with libraries.
• Someone commented that perhaps we need to not only add information to our stores, but evaluate that information as well.  Barbara responded that this is indeed true, and that in the future information professionals can help users evaluate which materials to spend their time on.  Rich (I think) responded that housing our collections costs millions and is an albatross around libraries’ necks.  Less and less of our resources will be allocated to maintaining space, and more to maintaining quality collections with less institution-by-institution redundancy.
• Someone asked how, by 2020, we will have solved the digital preservation problem.  Adam says that with wonderful partners like Mark, who have experience in digital preservation, Google will have helped to solve the problem.  The exact “how” was not discussed.  His immediate concern is to provide access.
• One person asked about the state of digital rights management in 2020.  Mark responded “it’s hard to say.”
• Another audience member discussed the original agreement between Google Print and U of M, saying that it was extremely thoughtful and well-explored.   She then asked about the grey area between out of copyright and in copyright works.  Barbara responded that a Google Full Court Press would solve that problem.
• Liz Lawley commented that right now she has multiple library cards so she can access a variety of materials through different collections.  She then expressed serious concerns about Google being the single source of information and the risks involved in having one collection and one selection policy.  Steve responded that he doesn’t believe it will truly be a single source, but that the rule of three will prevail and three companies will be providing access to digital materials.  Google will certainly be one, and most likely Yahoo and MSN will be the other two.  Stephen expressed a concern about how search engine optimization limits what shows up on the first few pages of results, and the limitations of what this will provide.  Barbara thinks that Amazon will be the third, not MSN.  Roy then discussed the Open Content Alliance: an alliance/consortium of libraries.  Adam responded that there are a lot of eyes on the company, and that they take the idea of what is in the index very seriously.
• Someone asked whether this bright digital future includes addressing equal access to quality education and to our energy networks (power).  Who will be responsible in 2020 for maintaining infrastructure and access?  Rich said that Google is truly adding something to our society.  The digital divide is a serious problem, but the content needs to be there for those on the wrong side of the digital divide to access…and that’s what Google is doing right now: providing the content.  Google has bid to provide wi-fi in San Francisco for free to everyone, so they are moving into the infrastructure business as well.  Roy said that public libraries will continue to play a key role in providing public access to everyone.  He also noted that Microsoft has given a lot of money to public libraries to provide connectivity to the public as a whole.

At this point they made the big announcement (see also my previous post): Paula says that the Open Content Alliance had an official inaugural event in San Francisco at the Presidio.  The Open Content Alliance is Yahoo, HP, the University of California, and others, and limits its content to out-of-copyright works.  Tonight Microsoft announced that they have joined the Open Content Alliance, and that they’re funding the digitization of 150,000 books this year.  Microsoft’s service will be called MSN Book Search.

More questions from the audience.
• Someone asked: “What will happen to printed books?”  The panel’s answer: they’ll all stay on the shelves; they won’t be dumped once they’ve been scanned or digitized.  Adam responded that print isn’t going away—that the vast majority of books Google Print is scanning are in-copyright books for which they can’t scan and present the full content.  As such, the printed materials will continue to be the only way to access the full content of the item.

Closing remarks from the panel:
• Rich: Are the schools thinking of guaranteeing that they’ll keep at least one copy of the printed material?  Furthermore, he believes that the prospects are wonderful for the democratization of information.
• Steve: The collections will be split: frequently used books will be circulated with access on demand, and less frequently used books will have a one-day delay for access (due to cost and space constraints).  He doesn’t believe print will go away—it’s a “non-starter.”
• Roy: Roy says he bangs on Google about librarians, and on librarians about Google.  We both “have a long way to go to get it.”
• Mark: After some disparaging comments about Elsevier (rock on), Mark continued on to say that corporate, legal, academic, and public libraries that don’t take the time to perceive the changes will indeed disappear.  We can create communities of content for communities of users, specialized access tools, upgrades of content, and more…let’s do it!  There are lots of opportunities for libraries to privilege content that deserves to be privileged.
• Barbara: A tipping point was reached a while ago by the people who use the technology, not the people who create it.  When everyone in the world is online, that will be the true tipping point.  Information professionals feed off of human ignorance, and we will never starve.  (hoo-hah, says the Sarah).
• Adam: Google believes that information access is a basic right, and we need to work together to realize that right.


“Internet Librarian: Google-brary: The Status Quo of Tomorrow’s MEGALIBRARY”

  1. Library Bitch Says:

    Posts are never too long when they are this fascinating. Thanks for putting that up there. The possibilities are amazing – so long as (can you believe I’m about to say this?) we don’t get so enamored with the technology that we forget our own basic principles behind it. The medium doesn’t matter as much as many believe it does, but the standards of collection development and maintenance, as well as the availability to the public of appropriate “reading technology” for the medium of content (be it disc, jump drive, card, or whatever), will be of highest importance. No sense in “books” being available on, say, a minidisc that only a few are able to use, right? Sort of like when CD-ROMs were big in libraries – that was before the average joe had them in his computer, or was using a laptop, and they never really took off as a viable medium because of that. We use them, sure, but to what extent compared to the hype when they were introduced to libraries? The same danger faces us here.

    Whew – long comment. Anyway, so long as we can keep treading the waters, and keep our librarian heads above the waves of technology to be able to take the best approach for our respective user bases, libraries will be fine no matter which direction technology goes.

    Thanks for the insights,
    G

  2. Sarah Houghton (LiB) Says:

    I agree, and I want to emphasize one thing that you wrote: that we need to “take the best approach for our respective user bases.” I think that’s one thing we often forget as techies: we think everything new and cool is automatically good for everyone. But if your users are among the information poor, perhaps you had best start with the basics (e-mail, Internet browsing) before you move on to coolio things like blogs and podcasting. A good reality check for us to take note of with any new project.

  3. Library Bitch Says:

    Right on. Baby steps are the key. You jump in the river without being able to swim, the current will pull you under and drag you away. But if you ease in, a step at a time, and get yourself used to the current, slowly you will begin to adapt to the new conditions.

    That’s your cheesy metaphor of the day. :-)

    Peace,
    G


Blogs & Wikis Face Off
Steven M. Cohen, Pub Sub and Librarystuff.net
Jenny Levine, The Shifted Librarian

Steven mentioned two great wikis: Library Success: A Best Practices Wiki and the ALA Chicago 2005 Wiki.  TikiWiki will build a web-based wiki for you for free. 

So…conference coverage: do blogs or wikis work better?

The advantages of blogs are that it’s easy to post, posts display in chronological order, you get an automatic RSS feed, and comments are allowed.  One disadvantage is that only the blog’s authors can edit the content of a post.  Why might it not work?  The people most likely to contribute are already bloggers, and they’re probably posting to their own blogs.

The advantages of wikis are that it’s easy to post and the content is editable by anyone.  The disadvantages are that posts are not chronological (which may turn into an advantage after the conference), there’s no automatic RSS feed, and there are no comments.  However, non-bloggers who are web-savvy might be more likely to contribute to a wiki than to a blog.

For this conference, Jenny pointed to the Technorati tags (IL2005 and IL05) and the Flickr stream (IL2005, IL05) as the biggest success stories for conference coverage.
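For anyone who hasn’t tagged a post before, a Technorati tag is just an ordinary link in your post body carrying the rel="tag" attribute; Technorati’s spiders pick it up from there.  A minimal sketch (the surrounding paragraph text is invented for illustration):

```html
<p>
  Live from Monterey!
  <!-- Each tag is a link to the Technorati tag page, marked rel="tag" -->
  Tags:
  <a href="http://technorati.com/tag/IL2005" rel="tag">IL2005</a>,
  <a href="http://technorati.com/tag/IL05" rel="tag">IL05</a>
</p>
```

Most blogging software lets you drop this straight into the post HTML, which is exactly what the “Technorati tags:” lines at the bottom of these posts do.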

There was a lot of audience participation in this session, sharing of resources, tips and tricks, new tools, and more. 


“Internet Librarian: Blogs & Wikis Face Off”

  1. Science Library Pad Says:

    blog and wikis for conference coverage

    What is possible with conference coverage seems to change with every year as more and more people become comfortable with web tools. There seem to be just a few basic elements that succeed: People like to blog in their own


Blogging at the University
Susan Herzog

Susan has a great list on her website: Blogbib (check this out if you haven’t yet!).  It provides (among other things) annotated entries for library/librarian blogs.

Susan started with some basic definitions of what a blog is and its role in today’s information world.  She encourages bloggers to learn at least a little HTML to deal with blogging software when it doesn’t do exactly what you think it should.

She then moved into how blogs are good for universities and academic libraries, in terms of PR, outreach, and personal and professional advancement.  She also encourages the use of blogs internally within the library.  The first step toward blogging at your academic library is to find existing academic library blogs and read them.  She also gave a number of pointers on potential content for both internal and external blogs.  Before creating your blog, you also need to consider the purpose of the blog and its intended audience (I feel like I’ve heard this a dozen times during the conference, but it’s just as true each time I hear it).

She also encourages libraries to develop a blogging policy to ensure that your blog reflects well on your library and offers your bloggers guidance about what is or isn’t appropriate.  One good piece of advice she gave us was to test our blogs in different browsers.  I would add that just because you’re using some spiffy blogging software doesn’t mean it’s cross-browser compliant.  You’re responsible for the accessibility and compatibility of your blog.  She concluded by showing us several live examples of academic library blogs.



Marketing the Weblog
Jill S. Stover, Virginia Commonwealth University

Jill tells us that marketing is not what we think it is: things like aggressive sales and spam are examples of bad marketing, not the kind of marketing she promotes.

Marketing has five parts: target market, product, price, place, and promotion.  What you know about your target market drives what you do in the other four (the 4 Ps). 

Questions to ask before you start: What do you have to offer?  Use a SWOT analysis (strengths, weaknesses, opportunities, threats).  Who are your readers and what do they need?  Who’s in the blogosphere?  Are they actually your audience?

She also encourages breaking out your audience into even smaller audiences.  Manageable segments are distinct from other segments, homogeneous within the group, “profitable,” measurable, and reachable.  You don’t have to segment based solely on age; try segmenting based on interests (mystery lovers, gamers, photographers).

Product: You need good content, in terms of subject matter, updates, uniqueness, and appropriate voice/tone.  Resources to help with the development of your product (your library’s blog): “Lose the Jargon, Voice Your Brand” in Business Week and Writing for the Web by Jakob Nielsen.  The design needs to reflect the content and audience.  Design resources: Webmonkey and ColorBlender.

Price: It’s free!  Whoo hah!

Place: Have an RSS feed.  Where do you place a link to your blog?  It depends on who your target audience is and what kind of other resources you’re providing online.  Explain the rules for syndicating your content to your readers.  For more ideas, see the RSS4Lib blog.
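If “have an RSS feed” is new territory, most blogging software generates one for you automatically; this is roughly what the software is producing behind the scenes, as a minimal RSS 2.0 sketch (the library name, URLs, and post are invented for illustration):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <!-- The channel describes the blog as a whole -->
    <title>Anytown Public Library Blog</title>
    <link>http://www.example.org/blog/</link>
    <description>News and new resources from your library</description>
    <!-- One item per post; aggregators display these to your readers -->
    <item>
      <title>New genealogy databases</title>
      <link>http://www.example.org/blog/genealogy-databases</link>
      <pubDate>Wed, 26 Oct 2005 09:00:00 GMT</pubDate>
      <description>A quick tour of the genealogy databases we just added.</description>
    </item>
  </channel>
</rss>
```

The practical takeaway for “place” is simply to link this feed prominently (usually the little orange XML/RSS button) wherever your target audience will see it.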

Promotion: Decisions depend on your target audience.  Don’t reinvent the wheel, and involve both your readers and your staff.

She also encouraged us to think about what success for our blog means to us.  Lots of posts?  Lots of comments?  Lots of visits?  Lots of trackbacks?  She encourages us to review the other blogs that are already out there to see what our fellow librarians are doing.  Finally, she tells us that we should be reading blogs both by librarians and for librarians.



Library Blogs—Ethics & Guidelines
Karen G. Schneider, Librarians’ Index to the Internet
Presentation available on Free Range Librarian

I always love hearing Karen talk, and today was no exception. 

She started by asking us why ethics matter.  Through your blog, you represent yourself and everything you’re connected with, including librarianship—be smart.  For most readers, you are the last stop between the reader and the truth—be accurate.  Information has a long half-life—be careful.  The rules of ethics are also the rules of self-preservation—again, be careful.  The blogosphere can be cruel—watch your back.  The biblioblogosphere can be crueler—again, watch your back.  She also says that on a macro level, the harder we work to make the world a moral place, the better it is for everyone.  She cited Ranganathan’s first law: “Books are for use.”  We are an ethical profession, defined by our concern for other people, for our users.

Rebecca Blood and Cyberjournalist.net both have codes of ethics for bloggers. 

Karen spoke about the necessity of transparency in blogging—knowing where the blogger’s coming from…any biases or goals (s)he may have.  To be transparent to your readers, have an About page, fully disclose any conflicts, biases, or vested interests, and be honest about who you are and what drives your writing.  Transparency can also be strategic, as in the case of Groklaw (the blog of a paralegal drawing connections between different technology companies, who was hounded by journalists and pre-empted them by posting all of her information on her blog).  Transparency also minimizes “fisking” (critiquing a piece of writing by highlighting all of its errors).

She also discussed the need for citations—to allow people to refer to the full versions of the things you are discussing.  Karen cited Michael Gorman’s “blog people” article as an example of making hugely inflammatory claims without citing any of your sources.  Accuracy is also essential (but of course!).  Check your facts a hundred times over—you’re responsible for what’s on your blog, absolutely.  Examine the credibility of your sources—this is what we do for a living, let’s also do it on our blogs.

She also encouraged us to always be fair, to give people equal chances for expression, and not to let partiality stand in the way of what’s right.  One piece of this is to always let a source know that you’d like to use their words on your blog (on the record vs. off the record).  Opinion is okay, but don’t present it as fact.  And if you are amazingly claiming to be objective, then you definitely have to present all sides of the issue.  And let your readers comment…period.

Finally, Karen advised us to admit our mistakes.  Mistakes can be errors of judgment or fact…you need to admit to both. 



What’s Hot & New in RSS, Blogs, & Wikis
Steven M. Cohen, Pub Sub and Librarystuff.net

Steven presented to a jam-packed audience—I’d estimate about 400 people were in the room.  Wowzers!

Steven started by showing us the conference wiki & blog and encouraged audience members to participate in both of them.

Steven’s presentation is available.

One trend Steven is seeing is that everything is in beta (which seems a bit overcautious and gets rather annoying when sites stay in beta for over a year).  One example is Google News, which has been in beta for 2½ years (though this is probably for legal reasons: they can’t advertise on the site and make money, because they’d be sued by the content providers they’re using).  He also mentioned the problem of splogs: machine-generated blogs that contain spam, spam, spam, spam!  He has also seen more people using web-based aggregators instead of desktop aggregators for RSS.

Over the past year, Steven has seen Google and Yahoo creating blog searches and catching up with RSS, Wikipedia getting a lot of attention, people listing alternative methods of communication (blogs, del.icio.us accounts, IM) on their business cards, and small blog companies being purchased by larger companies.  He also sees the return of customized portals (NetVibes.com, My Yahoo, Google Personalized, etc.) with personalized content on your very own homepage (news, RSS feeds, etc.).  Also the rise of ratings sites (Reddit.com, Digg.com, Memeorandum.com, Oishii.com)—let’s incorporate patron ratings into our library periodical databases and online catalogs. 
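The patron-ratings idea is easy to prototype.  Here is a minimal sketch (the class and method names are hypothetical, not from any vendor’s product) of how a catalog might collect and average 1-to-5 star ratings from patrons:

```python
# Minimal, hypothetical sketch of patron ratings for catalog items.
# A real implementation would live in the ILS database, not in memory.
from collections import defaultdict

class RatingStore:
    def __init__(self):
        self._ratings = defaultdict(list)  # item id -> list of 1-5 scores

    def rate(self, item_id, score):
        """Record one patron's rating for a catalog item."""
        if not 1 <= score <= 5:
            raise ValueError("score must be between 1 and 5")
        self._ratings[item_id].append(score)

    def average(self, item_id):
        """Average rating to display next to the item, or None if unrated."""
        scores = self._ratings[item_id]
        return round(sum(scores) / len(scores), 2) if scores else None
```

Displaying the average alongside each catalog record is the kind of lightweight interactivity that sites like Amazon have trained users to expect.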

He also sees a lot of interaction and collaboration through things like Library Thing, Reader2, and Livemarks.  Library Thing is very cool, and I’ll put in a personal plug for it.  I was amazed by how many people in the room hadn’t ever heard of it (it’s been all over the blogosphere for 6 months now).  Reality check for me, I guess.  He discussed life-management tools like 43 Things (people helping other people get things done) and Planzo (planning software).  He discussed metasearch tools like Gada.be and Kebberfegg.  Other new applications to keep an eye on: Meebo, Feedshake, Feedmarker, etc. 


Keynote—Social Computing & the Info Pro
Elizabeth Lane Lawley, Rochester Institute of Technology Lab for Social Computing and Microsoft Corp.

Lawley spoke a bit about the long tail theory, stating (as Lee Rainie did yesterday) that the bulk of content is in the tail, not in the popular blogs that a zillion people are already reading.  She noted that librarians have traditionally been good at being aware of long tail resources…things that very few people know about, but that are valuable. 

She also stressed that we’re using computers to augment our social networks, not to replace them.  Human-to-human interaction is not going away, despite many people’s fears.  Most of the computing tools we use today (including a lot made by Microsoft) are frustrating to use and frankly “suck.”  She noted that after first meeting the people at Microsoft, she couldn’t understand how such smart, nice people created such crappy software.  She’s found that it’s due to large-organization mechanics (which happen in any organization, including libraries).  Great innovative ideas get mutilated on their way to the end product by committee work, risk management, and cost-benefit analysis. 

She says that what we need is software that makes really hard tasks really easy, and therefore empowers the user.  What makes searching tools more usable are social networking components.  She compared a search for “clay” on Google (Clay Aiken, polymer clay, Jars of Clay) to one on Yahoo (Clay Shirky for all hits on the first page).  Why?  Because she’s customized her Yahoo search page (My Web 2.0) so it knows what kind of information she typically wants to find.  Smart web searching is good.

She also spoke about social bookmarking sites like del.icio.us, and their role in social networking—communicating items of personal interest to others in an easy and efficient way.  It also connects you to people who are interested in the things you are also interested in…to experts in your fields of interest. 

She stressed that not all social networking is about friends…it’s about people with similar interests who you will most likely never meet.  It’s an informational network, not necessarily a buddy network.  She showed something that Jenny Levine showed yesterday: the La Grange Park Library has a del.icio.us account with all their reference bookmarks.  This is not only shared with the reference staff, but with the public as well.

She also stressed the importance of human filters (such as filtering your search results through the bookmark sites of trusted people or organizations).   

Beyond that, she spoke about tagging: even if we as librarians don’t think it’s a good idea, it’s here and it’s not going away.  She also spoke counter to current thinking by saying that tagging does not show you the long tail; rather, it focuses on a fairly narrow vocabulary.  Finally, for students doing website design who are trying to decide what to call their categories, she recommends going to del.icio.us to see what people are calling things.

She showed us the ESP game.  The game flashes an image and you have to type in what you think the best keyword will be to describe the image (with some general taboo words listed that others have chosen).  If you and your anonymous partner on the other end choose the same word, you move on to the next image.  She mentioned that many cultural biases are revealed in this game—girl vs. woman, racial slurs, etc.  Do we want these biases coming through in our tagging? 
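The ESP game’s matching rule is simple enough to sketch.  This toy version (the names and signature are illustrative, not the game’s actual code) checks whether two anonymous partners have agreed on a non-taboo keyword:

```python
def esp_match(guesses_a, guesses_b, taboo=()):
    """Return the first keyword both players typed, ignoring taboo words.

    guesses_a and guesses_b are the words each anonymous partner has typed
    for the current image; a shared non-taboo word means the pair advances.
    """
    taboo_set = {w.lower() for w in taboo}
    words_a = {w.lower() for w in guesses_a} - taboo_set
    for word in guesses_b:
        if word.lower() in words_a:
            return word.lower()
    return None
```

The taboo list is what pushes players past the obvious labels, and the words players converge on first are also where the cultural biases she mentioned surface.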

She cautions us against assuming that because we don’t like a particular technology, or don’t find it useful, it isn’t useful or good for anyone.  Attention is a form of capital.  Speakers and resources and organizations can’t demand attention from you without giving you something in return.  The lecture pass-the-time activities of passing notes and reading the newspaper have morphed into computer activities.  Students are listening when there’s something they need to hear and not listening when there isn’t.  Grades and participation don’t go down…students are just giving us their “continuous partial attention.”

She also recommends that we read the “Meet the Lifehackers” article in the New York Times Magazine (October 16th). 

[for-pay access only--that is teh suck]


Hardware Solutions
Aaron Schmidt, Thomas Ford Memorial Library
Bernadine Goldman, Los Alamos County Public Library

Bernadine Goldman spoke about some of the problems her public library was facing with its public computers, and potential solutions.  Problems: memory stolen out of the PCs, fights over whose turn it is on a computer, reams of printouts left behind, and staff having to deal with all of this.  How do you research a solution?  Look for vendor information at conferences, read the literature, check out WebJunction and listservs (like LITA-L), view online demos, visit vendor websites, and when you’re at other libraries ask the staff there about what they use (you’ll usually get an honest review).  They chose thin clients as a solution: they’re not vulnerable to tampering, can be updated from one central server, take up less physical space, use familiar software applications, and were in line with the library’s existing technology plan.  Goldman also encouraged attendees to be cautious when implementing vendor solutions: it’s never as simple or as quick as they claim.  Be prepared for that.

Aaron titled his presentation “Smart Computing at Your Library,” or “Geek to Live, Don’t Live to Geek.”  Aaron believes that we can provide our patrons with computers that are easy to use and that also meet their needs.  We should allow patrons to save files to the desktop, then delete those files after the patron is done to protect their privacy.  We should also provide access to USB ports for flash drives, and make instant messaging and CD-burning available.  Multimedia capabilities, image-viewing software, and game-playing capabilities are also important to offer.  Aaron even suggests that users be able to install programs on our computers (which elicited some oohs and aahs from the audience).  He also suggests that we keep tabs on our machines with a maintenance checklist: checking monitor settings and mouse condition, installing updates for Windows, Adobe, Shockwave, Flash, and any other browser media plugins you’re using, and cleaning your keyboards.  He recommends Firefox, CCleaner, Zone Alarm, and Startup Inspector.  Aaron’s most important point was that we should ghost our computers: create a back-up image that can be reinstated upon each reboot.  Any money invested in this technique will be recouped in staff time costs and then some.
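Aaron’s save-then-delete suggestion can be sketched in a few lines.  This is a hypothetical example (the paths and the logoff hook are my assumptions, not his actual setup), and ghosting the whole machine accomplishes the same thing more thoroughly:

```python
# Hypothetical sketch: wipe everything a patron saved to the session's
# desktop folder once they log off, protecting their privacy.
import shutil
from pathlib import Path

def wipe_session_files(desktop_dir):
    """Delete all files and folders a patron left behind in desktop_dir."""
    desktop = Path(desktop_dir)
    for entry in desktop.iterdir():
        if entry.is_dir():
            shutil.rmtree(entry)   # remove saved folders recursively
        else:
            entry.unlink()         # remove individual saved files
```

Hooking a script like this into logoff gives patrons the convenience of saving to the desktop without leaving their documents behind for the next user.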


Social Software and Sites for Public Libraries
Jenny Levine, The Shifted Librarian
Jessamyn West, http://librarian.net/

The wonderfully talented team of Jenny and Jessamyn presented to a packed room about social software: one of the key buzzwords in the technology world. 

Jessamyn’s presentation was titled “Flickr, Tagging, and the F-Word.”  Flickr is a photo-sharing website, recently bought by Yahoo!.  You can take photos from your digital camera, from iPhoto, from your phone, from anywhere, and quickly post them.  You can both search and browse Flickr.  One of the great things about Flickr is that it promotes sharing.  Flickr allows you to license your photos with a variety of licenses—use photo, modify photo, redistribute photo, etc.  Flickr also allows commenting and notes on each photo, which is another method for sharing information and opinions.  What makes Flickr different from other photo websites?  TAGGING!  Tagging on Flickr is just like metadata…but extremely free-form and flexible.  One person may tag a photo as libraries, while someone else may then tag it as architecture, buildings, public, exterior, USA, etc.  Both the poster and the viewers can tag items.  The tags act like subject headings in a catalog—they’re hyperlinked to a complete list of items with that tag.  This flexible method of tagging allows communities to evolve in Flickr, enables people-driven classification (fancy that!), and makes for a much more interactive and useful online environment.  Flickr is an example of a folksonomy.  Folksonomy is the F-word.  It’s created “by folks.”  People decide what access points are important to them for findability.  It allows people to use different names for things—again, fancy that! 
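At its core, the tags-as-subject-headings behavior Jessamyn described is an inverted index from tags to items.  A minimal sketch of the idea (not Flickr’s implementation):

```python
# Illustrative sketch of a folksonomy-style tag index: each tag maps to
# every photo carrying it, and anyone (poster or viewer) can add tags.
from collections import defaultdict

class TagIndex:
    def __init__(self):
        self._index = defaultdict(set)  # tag -> set of photo ids

    def tag(self, photo_id, *tags):
        """Attach one or more free-form tags to a photo."""
        for t in tags:
            self._index[t.lower()].add(photo_id)

    def photos_for(self, tag):
        """All photos with this tag, like clicking a hyperlinked tag."""
        return sorted(self._index[tag.lower()])
```

Because the vocabulary is uncontrolled, one photo can live under libraries, architecture, and buildings all at once, which is exactly what makes browsing by tag feel like browsing by subject heading.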

Jenny talked about del.icio.us, another example of a folksonomy: tagged bookmarks.  del.icio.us is also searchable.  Jenny suggests that del.icio.us is a great place to learn about subjects, as people have already decided that these sites are memorable and worth keeping.  This is a great way to access pre-filtered information on topics.  There are a number of hacks, including tagging something “ForX” (X = person’s name) and then setting up an RSS feed for that tag to send sites to someone else.  The La Grange Park Library has created a set of reference bookmarks that are accessible by any of their staff and (very important) any of their users.  What better way to keep reference bookmarks (as opposed to the oh-so-evil IE bookmark files)?  You can also display these bookmarks as subject guides on your website (through an HTML display of your RSS feed for that tag).  The Thomas Ford Memorial Library is doing this now (rock on, Aaron).  43 Things is a wonderful new project through which people list the 43 things they want to do in their lifetimes, and which helps people with similar goals connect.  Jenny also showed us Books We Like, a site where people tag books that they’ve enjoyed.  Jenny feels that tagging has a role in our library catalogs in the future, inviting our users to contribute tags that help us organize our materials and make them more findable.
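The display-your-bookmarks-as-a-subject-guide trick boils down to transforming an RSS feed into an HTML list.  A hedged sketch (the feed markup below is a simplified stand-in, not the exact del.icio.us feed format):

```python
# Sketch: turn an RSS feed of tagged bookmarks into an HTML link list
# suitable for pasting into a library subject guide page.
import xml.etree.ElementTree as ET

def rss_to_html(rss_xml):
    """Render each <item> in the feed as a hyperlinked list entry."""
    root = ET.fromstring(rss_xml)
    links = []
    for item in root.iter("item"):
        title = item.findtext("title", default="(untitled)")
        link = item.findtext("link", default="#")
        links.append(f'<li><a href="{link}">{title}</a></li>')
    return "<ul>\n" + "\n".join(links) + "\n</ul>"
```

A scheduled job that refetches the feed and regenerates the page keeps the subject guide current without anyone touching the HTML by hand.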


“Internet Librarian: Social Software and Sites for Public Libraries”

  1. The Shifted Librarian Says:

    More Taggy Goodness

    During the Jessamyn and Jenny show at the Internet Librarian conference, I was very glad Jessamyn emphasized that our interest in tagging and folksonomies does not mean we advocate doing away with structured classification or se…

  2. Bruce Landon's Weblog for Students Says:

    games


People and Technology
David King, Kansas City Public Library
Michael Stephens, St. Joseph County Public Library

David King talked about hiring and keeping techie staff.  The job ad is the applicant’s first introduction to your system—so make it specific and impressive.  Focus on what the job-seeker wants to know re: responsibilities.  Be specific about what you’re looking for (manages SQL and MySQL databases vs. understands database management and structure).  Where do you place your ad?  Is an MLS required?  If so, focus on online placement on library listservs and websites and your website (but of course).  If you don’t require an MLS, use local resources (newspaper), Monster.com, and your website.

If you have a good internal candidate who’s already serious about customer service in the library, who just needs tech training to meet the job requirements, you should seriously consider hiring internally.  Internal candidates are already comfortable in a library environment, which is very different than many technology company environments.

Some sample text for a job ad: “This person should be a quick learner and enjoy technology changes.”

Keeping technology staff (some things to consider): competitive benefits and salary very important, praise and recognition of their efforts, allowing them to experiment, creating personal relationships between them and other staff members, getting them involved in the library overall through taskforces, planning groups, and goal setting initiatives, allowing your tech staff to get the training they need (which is seriously different than the training other staff need, and probably quite a bit more expensive).  Finally, techies like toys…let them play with new gadgets, new services, and new resources.

Michael Stephens offered one of his famous top ten lists (bless you Michael, and your David Letterman-like coolness).  He’s posted his list on Tame the Web.  His focus was how to get staff-buy-in for library technology.  You need to ask yourself “Why are we doing this?”  Answer that question for staff before you do anything else.

#1: Listen to the conversations going on in your organization.  What are people saying about how projects are being completed in the library?  Without buy-in, you’re not going to be able to sell your new initiatives to your users.
#2: Involve staff in planning right away.  The staff on the front lines are best equipped to tell you what the public says or needs or wants.
#3: Tell stories about what’s happening in your library.  Tell staff about what you can do with these new technologies, tell anecdotal stories about what users are doing and finding with these technologies.
#4: Be transparent.  Don’t hide what you’re doing from the public (tell them how you’re spending their money) and don’t hide what you’re doing from staff either (blog for them when you’re at conferences).  Tell them what you did and what you learned.
#5: Report and debrief.  When you come back from conferences or trainings, do a debriefing of what you learned for staff.  What are the 3 most important things you’ve taken away from the conference?
#6: Do your research.  Keep up with what’s new in technology.  No excuse not to.
#7: Manage projects well.  Keep meetings open and dynamic and make it an ongoing conversation.
#8: Offer training for all technology.  Before you roll anything out, let staff play with the technology.
#9: Let them play.  Let staff see how cool and fun the new technology is.  For example, gaming at libraries.  Let them play the games.
#10: Celebrate successes.  Stop and say “Wow, we just did something super-cool.  Good for us.”
Bonus: Get away from the technology when you can.

A member of the audience challenged the speakers by saying that all of this new technology is too much for most librarians to learn.  David and Michael responded that continuous change and learning about technology is part of our profession.  That’s not going to change.  All I gotta say to that is "right-freaking-on."  Roy Tennant said at his LITA keynote earlier in the month that if you’re comfortable, you’re not paying attention.  In our profession today, you have to keep up…you have to be continuously learning.  If you don’t want to do that, or aren’t comfortable with it, you may want to consider switching jobs (my words, not Roy’s).


Digital Content
Joe Latini and Ken Weil from South Huntington Public Library

We were fortunate to be able to hear from the Director and Assistant Director of the library providing audiobooks on iPod Shuffles to their patrons.

Why provide digital audio books? 
Downloading audiobook content is far cheaper than purchasing the books on CD, cassette, etc.  The savings can be used to buy MP3 players, there’s no need to replace lost or damaged cassettes or CDs, new titles are available much sooner, and there’s no need for shelf space.

There is no audio book vendor that provides both universal access and high demand digital content.  iTunes offers current high-demand content and the library owns the content.  Patrons do not need to own a computer or an MP3 player, or have high speed internet access (since the library is loaning out books already preloaded on players). 

Copyright concerns were also addressed.   If the library only owns 2 copies of a title, that’s all that they will circulate at any one time. 
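That rule is straightforward to model.  An illustrative sketch (the class and method names are mine, not South Huntington’s actual system):

```python
# Sketch of the copyright rule above: never circulate more simultaneous
# copies of a title than the library actually owns.
class TitleCirculation:
    def __init__(self, copies_owned):
        self.copies_owned = copies_owned
        self.checked_out = 0

    def checkout(self):
        """Allow a checkout only if an owned copy is still available."""
        if self.checked_out >= self.copies_owned:
            return False
        self.checked_out += 1
        return True

    def checkin(self):
        """Return a copy to the available pool."""
        self.checked_out = max(0, self.checked_out - 1)
```

Digital copies don’t wear out, so this self-imposed cap is what keeps the model analogous to lending physical audiobooks.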

For purchasing audiobooks from iTunes, the library needs a credit or debit account with iTunes and iTunes software.  The title is purchased and downloaded by the library, and then stored on a server (with a backup copy as well).

So, what do the patrons actually get when they check out an audiobook on an iPod Shuffle?  The package (a swank looking camera case) includes a title card with a barcode, a content card with the iPod barcode, an iPod shuffle, a power adapter/charger, a radio transmitter, an audiocassette adapter, a user guide, and an auxiliary input connector.

The books have a two week circulation period, no ILL, $1 per day overdue fine, circulation is restricted to their residents only, and the user has to sign a waiver form with borrowing terms and conditions.  This $1 fine even applies if the user has brought in his/her iPod and loaded it that way, and fails to come in to the library to have it deleted.

The library also allows users to bring in their own iPods and download books.  If a user chooses that route, however, they’ll lose any content or playlists already on the iPod when the library loads a book onto it (due to the ever-friendly Apple Digital Rights Management).  They’re moving away from this now.

What’s new at South Huntington with their iPod services?  Circulating music on iPods, developing young adult collection of content (audiobooks, music, all selected by young adults), doing an audio tour of art exhibits, and exploring podcasting library programs.


Web Trends & Innovations
Me, Glenn Peterson, David King, John Blyberg

This program kicked off the first ever Public Libraries track at Internet Librarian, and I’m honored to have been a part of it.  Our four person panel delved into website trends and technologies.

Glenn Peterson from Hennepin County Library talked about using your reference staff to create content for your website, moving toward XML, using Rapid Development Environments (like Dreamweaver) to make web maintenance less techie and less time-consuming, and creating integrated subject guides on library websites (including websites, databases, reading lists, blogs, catalog links, and more).

David King from Kansas City Public Library talked about what some larger public library websites are doing: Seattle Public, Phoenix Public, and New York Public.  He also closed the session with some predictions about where library web services are going: more interactivity, pushing content through RSS, more redesigns, and more services through instant and text messaging.

I talked about small libraries, and what they can do with no staff, time, or money, including blogging and RSS, simple linked subject guides and reading lists, quick one-click access to searches in the catalog (Quick Searches), simple online forms, and lightweight virtual reference options (my personal baby).

John Blyberg from Ann Arbor District Library talked about how his library uses Drupal and LAMP (Linux, Apache, MySQL, and PHP) to run their blog-based website.  What a great site with some seriously innovative thinking—a ton of interactivity options for the library’s users and staff.


“Internet Librarian: Web Trends & Innovations”

  1. Tame The Web: Libraries and Technology Says:

    5 More Factors for Effective Library Web Sites

    Watch Open Source applications closely I didn’t bring this out as much as I should have in my post at ALA TechSource, but other folks did which I appreciate! I am fascinated by what’s happening with Open Source and, ILS…

  2. Tame The Web: Libraries and Technology Says:

    5 More Factors for Effective Library Web Sites

    See this for the first 5 factors! Watch Open Source applications closely I didn’t bring this out as much as I should have in my post at ALA TechSource, but other folks did which I appreciate! I am fascinated by…



Internet Librarian: Opening Keynote: Shifting Worlds: Internet Librarians at the Forefront
Lee Rainie, Pew Internet and American Life Project
October 24, 2005

With standing room only, Lee Rainie spoke to a room of enthusiastic conference attendees.  Rainie quoted Elizabeth Eisenstein’s thoughts about the effect of the printing press on Europe.  Statements like this one (paraphrased) still apply today: “The role of information gatekeepers has changed dramatically, as those previously without a voice have found a platform for expression and now challenge the authority of those gatekeepers.”

One of the most interesting statements Rainie made was: “The more commonplace and invisible the technology, the greater the impact.”  We’ve heard this before, we’ll hear it again, and it’s still true.  I don’t care to know that the way I get my news is called an RSS feed, that it functions with XML protocols, whether it’s push or pull technology, or anything else about it.  All I care about is that it works and gets me news quickly and efficiently.  Invisible.  I would expand this to devices as well: I don’t want an eBook reader.  I want a device that I already have (a PDA, a laptop, an iPod) that will also work as an eBook reader.  Invisible.

Rainie shared some statistics with the audience.  68% of adults use the Internet. 87% of teenagers use the Internet. Broadband use is the norm, and more than 2/3 of internet users have broadband available somewhere—work, home, school.  But 1/3 of Americans do not consider themselves Internet users, and 1/5 have never been on the Internet.

Teenagers
12-17 year olds are more connected than ever.  They adore instant messaging: ¾ use IM, and ½ of them use it every day.  45% have cell phones.  Teenagers are re-defining what it means to be present; physical proximity and time of day don’t matter anymore.  We as libraries need to re-evaluate what it means to be available and present for other people.  These tools allow teens to play with their identities through profiles, away messages, and Facebook.  8 of 10 teenagers play online games.  43% of teenagers have purchased something online.  Teenagers are also media creators (stories, photos, artwork) and are sharing their creations online.  Teenagers are also high users and creators of blogs and their own websites (much more so than adults).  And teenagers multitask relentlessly. 

Politics
75 million Americans used the Internet last year for some kind of campaign-related purpose.  Using online tools for direct participation in politics (political action, meet-ups, contributions to campaigns) has gone up significantly.  For younger tech-savvy broadband users (under 35), the Internet rivals television as a source for political news.  Internet use for politics is correlated with voting: if you use the net, you’re more likely to vote.  Do Internet users isolate themselves, reading only those who agree with them and shutting out those who disagree?  Pew’s study during the last campaign found that this was in fact not true—the Internet instead contributed to a wider knowledge of alternative views. 

People’s Use of the Internet in Major Moments
Major moments could be finding a new home, finding health information about a major illness, making a financial decision, etc.  More people are using the Internet at these moments, and saying that the Internet is playing a large role in these decisions.

Closing Remarks
Rainie spoke about the “long tail,” an idea from Chris Anderson of Wired magazine.  On a power curve, very few things have high use, and a lot of things trail out into low use.  Anderson has looked at movie sales and rentals, music sales, book sales, etc.  Between 40% and 50% of purchases are from the long tail section (little-known, rare, non-mainstream items).  This is a huge change—improving the position and visibility of these items is resulting in their increased consumption (through social networks, word of mouth, rating systems like Amazon’s reviews, etc.).  He also spoke about smartmobs, specifically the amazing networking of the WTO protesters in Seattle through cellphones, texting, etc.  His last remark was regarding “continuous partial attention” (the idea of XXXX).  It’s different than multitasking.  It’s scanning incoming content for the best one…the best path to pursue, the best resource to look at, the most interesting input.  Umm, yes.  I’m pretty sure I do that all the time…well, except when I’m sleeping.
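The 40-to-50% figure is plausible even under a very simple model.  As a toy illustration (my own assumption, not Anderson’s data): if item popularity follows a Zipf-like power law, the share of consumption falling outside a short “head” of bestsellers can be computed directly:

```python
# Toy long-tail model: item popularity proportional to rank ** -exponent.
def tail_share(n_items, head_size, exponent=1.0):
    """Fraction of total consumption outside the top head_size items."""
    weights = [rank ** -exponent for rank in range(1, n_items + 1)]
    total = sum(weights)
    tail = sum(weights[head_size:])   # everything past the head
    return tail / total
```

With 10,000 items and an exponent of 1, the tail beyond the top 100 carries roughly 47% of total consumption, squarely in the range Rainie quoted.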


Pre-Conference Workshop: Web Managers’ Academy
Frank Cervone, Darlene Fichter, Marshall Breeding, Jeff Wisniewski
Sunday, October 23, 2005

This was a day-long session, but here are some of my favorite points from the day.
• A lot of library websites are still trying to replicate the physical library’s structure (e.g. reading lists, reference section, fiction section, circulation desk).  Next generation websites have to abandon this philosophy and integrate materials to answer users’ needs.
• Users often want just enough information to get by, to answer their question, to meet their immediate needs.  And maybe that’s enough. 
• There’s a big movement in libraries from purchasing content to leasing content (and I was thinking, we’ve always “leased” our content to our users—for free of course—but why shouldn’t we also lease that content from the vendors?)
• Purchasing a content management system if you don’t have one would be a good idea.  It will save your tech folks time as people start posting their own content through a WYSIWYG interface, and stop relying on the tech to make all of the changes.
• Database-driven sites and web services are already huge in the commercial world.  Libraries are behind, and it’s time we catch up so we can meet user expectations.
• Designing with web standards can help you in a number of ways, including making re-designs easier and improving the efficiency of your code (I think we all know this, it really is just a matter of finding the time to clean up all the junkily-coded things we already have sitting around).
• Usability testing doesn’t have to be costly or time-consuming.  Even a few users can give you invaluable data.


Preconference Workshop: Implementing Federated Searching and OpenURL-based Linking Initiatives
Frank Cervone and Jeff Wisniewski
Saturday, October 22, 2005

Frank was stuck in LA due to a cancelled flight (too much fog in LA—bah humbug).  The gracious Jeff Wisniewski from the University of Pittsburgh filled in for Frank in person, and Frank called in from LAX via speakerphone, sadly with limited effectiveness, as he faded in and out a lot.

The session addressed the role of OpenURL in federated searching (actually the clearest explanation of it that I’ve ever heard), how federated searching works, various customization options and features available from different federated searching vendors, what’s next with things like XML and Z39.50, and a nice list of questions and considerations when choosing a federated search product, such as:
• What can’t your software federate?
• What usage statistics are kept?
• How does the product do authentication?
• Can the interface be customized for specific populations in your user group?
• How does the product handle merging and de-duping of results?
• Is relevance ranking possible?
• How is the product licensed?
• Are there extra charges for additional interface development, etc., that isn’t already part of the software?
• What protocols does the system support?
• And many many more
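The merging and de-duping question is worth asking because even a naive approach exposes the tradeoffs.  A simplified sketch of title-based de-duplication (illustrative only; real federated search products use far more robust matching across multiple fields):

```python
# Sketch: merge result lists from several databases, dropping records
# whose normalized titles have already been seen.
def merge_results(*result_lists):
    seen = set()
    merged = []
    for results in result_lists:
        for record in results:
            # Normalize: lowercase and strip punctuation/whitespace.
            key = "".join(ch for ch in record["title"].lower() if ch.isalnum())
            if key not in seen:
                seen.add(key)
                merged.append(record)
    return merged
```

Title normalization alone will both miss duplicates (variant titles) and falsely merge distinct works with identical titles, which is exactly why de-duplication belongs on a vendor checklist.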

The session also addressed major trouble spots with implementation, such as:
• Categorization of resources
• Customizing the interface
• Rolling it out (internal resistance and publicity/awareness of your users)
• Ongoing maintenance issues (subscription changes, updated holdings, adding local items)

Overall, a good session, and I’m definitely going to have to refer back to the slide notes to get a lot of the things that we skipped over due to time constraints.
