Sujay Darji and Stephen Abram were the speakers for this session.  Sujay Darji started off by discussing his work at SWETS with eBooks.  SWETS is a subscription agent primarily known for aggregating periodicals.  So what do the content suppliers (aka publishers) need from subscription agents like SWETS?  Aggregators started approaching subscription agents wanting them to distribute their content.  A lot of small to medium sized publishers were inexperienced with eBook distribution and purchasing models.  SWETS tried to close some of those gaps.  Subscription agents needed to decide what terms and conditions they wanted to put into place for eBooks.  What are the headaches librarians experience with eBooks?  It's difficult to compare pricing between vendors because there are both content and "platform" fees.  It's hard to find out what eBook titles are available and hard to compare licensing terms.  Digital rights management, dictated by publishers, is inconsistent and horrible.  And how many eBook platforms are there out there?  SWETS wanted to focus on acquisition of eBooks.  His approach to eBooks is three-fold – acquire, manage, and access.  Managing millions of cross-publisher eBooks in one platform lets you navigate and control your collection without having to jump between platforms.  With the SWETS model, there is no need to create a separate platform; you simply integrate your eBooks into your existing publicly viewable discovery systems/ILSs.  The tool is free and open to all users, there is no platform fee, and it doubles as a research tool you can use to build your collection.

Stephen Abram's talk was entitled Frankenbooks (LOVE IT!).  When we're reading with a little bit of light on a screen, interaction with the screen is encouraged.  If we use a codex model to try to understand what textbooks of the future will look like, we'll get it wrong.  How do you engage learners, researchers, teachers, curriculum heads, testers, and assessors to agree on reforming their eBook textbooks?  Most eBooks are text that you can read end to end.  With many eBooks, though, you just want to read a specific section.  Stephen showed traditional publishing bingo and electronic publishing bingo cards.  Want!  Why do people like the smell of books?  Smell is the largest memory trigger, and with books they're remembering all of the things they learned, how they felt, etc.  How would you enhance a book?  What framework would you use?  We cannot take the old format and carry all the compromises forward.  Where do publishers move with all of the new options?  When you look at the physical act of reading, how does the act of learning happen?  The Cengage eBooks have embedded video, use HTML5, etc.  Look at the reading experience itself, not the devices.  He expressed deep concern about advertising making its way into eBooks, particularly in the Google Books project.  What if eTextbooks showed reports to teachers of what the students have actually read, how they're doing on quizzes, etc.?  Scholarly works – how does one do profitable publishing of "boring stuff"?  Stephen emphasized the idiocy of the Google "single station per library" model.  Amazon squashed Lendle, a Kindle book lending program.  And Stephen pointed out that lending content – isn't that something we do?  Device issues are huge.  Are we okay with Steve Jobs deciding what we read?  Stephen feels like there's now less concern about the craziness of eBook standards; we're in a renaissance for formats and standards.  We don't want to re-create all of the compromises of the 19th-century codex.  The Entourage Edge reader is a dual-screen reader.  Librarians need to understand the US FCC whitespace broadband decision.  We need to be mindful of mobile dominance, geo-awareness, wireless as a business strategy, and the fact that the largest generation is here and using this technology now.  What are we doing promoting a minority learning style (end-to-end text-based learning) to the majority of our users?  If we keep fighting all of our battles with publishers over text-based books, we're failing as librarians.  Multimedia and integration are the future.  What is a book?  Why do people read?  And how do we engage with all of the opportunities we have in front of us now?  Serve everyone!  We have to move faster.  Try to influence the ecosystem on a large scale.  Work with your consortium to effect change.  Let's move faster together!

David Lee King, Nate Hill, and I presented a session on making user interactions rock.

David's half of the session was a discussion of "meta-social."  How do you connect with your users?  David has a list of 8 meta-social tools.

#1: status updates.  Answer questions, ask questions, market the library's events and services, share multimedia.  All of this equals real connections to your customers.  David's library posted a user comment from their physical comment box about the art gallery, and the artist commented back, then a user… libraries connecting customers to the content creator.

#2: long posts.  Blogs are examples of this, as are Facebook notes and longer descriptions under a Flickr photo.  It's a way to share ideas in a longer format—events, thoughts, reviews, new materials.  David's library's local history department posted a photo of a cupola from a building in the town that was demolished.  They wrote about it in a blog post on their website, talking about the demolition and how they got the artifacts.

#3: comments.  All of those status updates and longer posts don’t live in a vacuum.  Comment back and have a conversation with users who are commenting to you.  On one of the library’s children’s blogs, the author commented back on a post about his/her book.

#4: visuals.  This can include photos and videos.  Blip.tv, YouTube, and Vimeo are the usual suspects for video.  Flickr and Picasa are the most-used for photos.  And this multi-media visual content can be embedded in many places.  David showed a neat photo from the library's "edible books" program.  It's a way to extend a physical event, getting more customer interaction and use online than you probably did in person.

#5: livestreaming.  This allows people to watch moments as they happen.  David suggests livestreaming library events.

#6: friending and subscribing (aka following or liking).  This lets users tell you they like you, but it also is a way for you to show that love back to your users.

#7: checking in.  Yelp, Facebook Places, Foursquare, Gowalla.  You can do this at the library, and your library can post good tips about its services on these sites.

#8: quick stuff.  Rating, liking, favoriting, digging, poking, starring.  These are very informal quick interactions that tell you how much people like or don’t like something you’re doing.  You can embed Facebook liking into your website.

Suggestions for starting out with social media. The first tip is to stop. You need some goals and strategy.  Otherwise you’ll do it for a few months and then give up, and your site will live on, inactive and not useful.  What are you going to put out there, who is going to do the work, how do you want to respond to people interacting with you?  Listen to see if people are talking about you and read what they’re saying – on Twitter, Google Alerts, Flickr tags, etc.  You want friends!  So let people friend you and friend them back.  Focus on people living in your service area. Follow your customers first, not non-local figures like, say, other librarians.  Think about your posts as conversation starters.  Ask what your users think to encourage participation.  Customers love social media, they’re already there, and they’re waiting for someone to start the conversation.  That person is you.

This session was presented by Margeaux Johnson, Nicholas Rejack, Alex Rockwell, and Paul Albert.

Margeaux started by talking about VIVO's origins.  It is not launched completely yet, but is being used and tested at many institutions.  It helps researchers discover other researchers.  It originated at Cornell and was made open source.  It was funded by a $12.5 million grant, and the project involves 120+ people at dozens of public and private institutions.  VIVO harvests data from verified sources like PubMed, Human Resources databases, organizational charts, and a grant data repository.  This data is stored as RDF and then made available as webpages.  VIVO will allow researchers to map colleagues, showcase credentials and skills, connect with researchers in their areas, and simplify reporting tasks, and in the future it will generate CVs automatically and incorporate external data sources and applications.  So why involve libraries and librarians?  Libraries are neutral trusted entities, technology centers, and have a tradition of service and support.  Librarians know their organizations, can establish and maintain relationships with their clients, understand their users, and are willing to collaborate.  There is a VIVO Conference here in DC in August, where you can learn a ton more.

Nick then talked about why the semantic web was chosen for this project.  The local data flow in VIVO is relatively simple.  And a cool feature allows all 7 operational VIVOs to connect with each other, somewhat similar to federated search technology.  Because the data is authoritative, they use URIs to track data about individual people within the system.

Paul then covered how the VIVO ontology is structured.  The data in VIVO is stored using the Resource Description Framework (RDF).  A sample semantic representation of the system's data was displayed, connecting people who wrote articles together.  VIVO can create inferences for you as well.  There are different ways of classifying data: Dublin Core, the Event ontology, FOAF, geopolitical classifications, SKOS, BIBO.  Several very complicated charts were displayed showing how different data in VIVO is connected.  So for modeling a person, you're going to have the person's research, teaching, services, and expertise in their data set.  Different localizations are required by different institutions.  He described how to create localizations in VIVO, but gave the caveat that this functionality will not necessarily work across institutions.  He recommends a book entitled Semantic Web for the Working Ontologist.
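
To make the RDF modeling a little more concrete, here is a rough Python sketch (using the rdflib library) of how a person and an article might be tied together through an authorship node.  The namespace URIs and property names are illustrative assumptions on my part, not something Paul showed, so treat this as the flavor of the approach rather than the actual VIVO ontology.

```python
# Rough sketch of RDF modeling in the VIVO spirit; namespaces and property
# names are illustrative assumptions, not the verified VIVO ontology.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, RDF, RDFS

VIVO = Namespace("http://vivoweb.org/ontology/core#")   # assumed core namespace
BIBO = Namespace("http://purl.org/ontology/bibo/")
EX = Namespace("http://example.org/individual/")        # hypothetical local URIs

g = Graph()
person = EX["faculty123"]
article = EX["article456"]
authorship = EX["authorship789"]

g.add((person, RDF.type, FOAF.Person))
g.add((person, RDFS.label, Literal("Doe, Jane")))
g.add((article, RDF.type, BIBO.AcademicArticle))
g.add((article, RDFS.label, Literal("A Sample Article")))

# Link the person to the publication through an intermediate authorship node
g.add((authorship, RDF.type, VIVO.Authorship))
g.add((authorship, VIVO.relates, person))
g.add((authorship, VIVO.relates, article))

print(g.serialize(format="turtle"))
```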

Nick talked about the importance of authoritative data in VIVO and of preserving the quality of the data.  There are many different kinds of source data: databases, CSV, XML, XSLT, RDF, etc.  These all go through a loading process: load the desired ontologies, upload the data into VIVO, map the data to the ontology, and finally go through data sanitation to fix mistakes and inconsistencies.

Alex concluded the session by talking about the ins and outs of VIVO.  How do you work with VIVO data?  The easiest way is to crawl the RDF.  You can also utilize SPARQL queries.  The University of Florida doesn't have a facility to create organization charts, and what they do have is in various inaccessible formats.  So they hand-curated the charts; when Alex wrote the program to handle this there were 500 people in it, and now there are over 1,000.  The design includes a data crawl, serialization, formatting, and then exports into text, graph visualizations, etc.  VIVO also has a WordPress plug-in that exports data into WordPress sites and blogs.  Cornell had a Drupal site, and a module was created to import VIVO data.  They're working on developer APIs to expose VIVO data as XML or JSON, accept SPARQL queries, etc.  He also created an application called Report Saver, which lets you enter a SPARQL query, save it, and pull out data on a regular basis for analysis.
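
For the curious, here is roughly what pulling names out of a VIVO instance with SPARQL might look like in Python.  This is my own sketch rather than anything Alex demonstrated: the endpoint URL is hypothetical (substitute your institution's own VIVO SPARQL endpoint), and it assumes the SPARQLWrapper package is installed.

```python
# Sketch of querying a VIVO SPARQL endpoint; the endpoint URL is hypothetical.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://vivo.example.edu/sparql")  # hypothetical endpoint
endpoint.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?person ?name
    WHERE {
        ?person a foaf:Person ;
                rdfs:label ?name .
    }
    LIMIT 25
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    # Each row gives the person's URI and their label (name)
    print(row["person"]["value"], "-", row["name"]["value"])
```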

This session was presented by Emily Wheeler and Samara Omundson.

In 2009 the digital universe grew immensely.  If you picture a stack of DVDs reaching to the moon and back, that's how much data growth we had.  By 2020 it is estimated that the digital universe will be 44 times the size it was in 2009.  We are drowning in data.  How can we make sense of it?  Apply structure to large quantities of data to help make sense of it.  Information professionals can lead the way through these piles of data, even if we're not statistics junkies or graphic designers.  We know how to sift through vast quantities of data and pull out those few salient data points.  Data, presented well, conveys a clear message, cuts through the chaos, and helps to engage and inform stakeholders.

One strategy for information visualization is using topic clusters.  As an example, they searched for "Bieber Fever" with a general search engine and displayed the results in a hierarchical format on a single PowerPoint slide.  Another visualization was a branching choice – almost a spider web of information circulating out from a central point.  Another strategy is using time series visualizations; these can be line graphs or bar graphs of a particular data point's change over time.  You can also highlight intersections between searches using search associations.  You can do this in spreadsheet tools, but they used a great tool called TouchGraph to create some really nice relational graphics.

How do you handle text analysis differently?  Keyword frequency is very useful for identifying repeated keywords—a simple word count provides this data point.  Quickly creating a bar graph of the number of mentions of various words can show the relative importance or permeation of various words and ideas.  You can use Tagxedo to create good keyword clouds.  By adding word association to simple keyword frequency you can see relationships between words and concepts.  Using different colors, sizes, and boldness for visualization elements can communicate relative importance quickly.  Structural data, like keyword associations, focuses on word order; this helps you drill down into the context of a given word.  They used IBM's Many Eyes tool to create a really nice looking structural chart.  Looking at social media data (Twitter and Facebook activity and followers) can tell a really compelling story about how social interaction and popularity relate to frequency of posting, where you post, etc.  They built a few visualizations in Adobe Illustrator.  Visuals tell a story; they show patterns.  They touched on infographics quickly.  An infographic is a visual representation of information, but most are designed to tell a visual story about pretty complex data.  You see these in large media outlets in articles and in lead stories.  It is really easy to transform data – try it out!
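
Just to show how low the barrier is for the keyword-frequency idea, here is a tiny Python sketch of a word count you could feed into a bar graph or a word-cloud tool like Tagxedo.  The sample text and stop-word list are made up for illustration.

```python
# A quick word count over a pile of text, ready to feed into a chart or
# word-cloud tool. Sample text and stop words are invented for illustration.
import re
from collections import Counter

sample_text = """
Bieber fever swept the search results again this week, with fans, fan clubs,
and fan fiction all competing for attention in the data.
"""

stop_words = {"the", "and", "with", "all", "for", "in", "this"}

words = re.findall(r"[a-z']+", sample_text.lower())
counts = Counter(w for w in words if w not in stop_words)

# The most common terms show the relative importance of words and ideas.
for word, count in counts.most_common(10):
    print(f"{count:3d}  {word}")
```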

Tips and tricks: Know your message, stay simple, and experiment with data visualizations whenever you can.

This panel had a whopping 5 presenters in 60 minutes.  Wow!

Ran Hock: Many real-time search engines cease to exist just as quickly as they were created.  Bing Social Search is an interesting experiment with real-time search.  Google has several real-time projects in its databases.  Google's wildcard feature lets you search for unknown words within phrases: use an asterisk as a placeholder for an unknown or variable word, and you can use multiple asterisks in one search as well.  Google does some really good stuff with automatic stemming and synonyms, but sometimes those terms are unrelated to your goal.  To get just "price" (as in "gary price"), add a plus sign: +price (just price, no stemming).  You can also precede a word with a tilde to get more synonyms.  Google Books' full-text indexing of the Full View books is great, and there is a "read on your device" link that provides a mobile-friendly version of the books.  Google Language Tools include Search Across Languages, Translate Text, Translate a Web Page, and the Google interface in over 120 languages.  Google has calculators and controls.  Ask sometimes works.

Gary Price: WebCite lets you take any URL, add it to the service, and it creates a permanent archive of the page. *nice!*  One of the great tools for PCs is called Website Watcher.  This lets you watch any webpage on any website, tracking every single minuscule change.  Change Detection lets you track changes too (and works on Macs) but only checks once a day.  Fuse Labs is a Microsoft labs service.  Microsoft Academic provides a lot of scholarly information you can't find elsewhere; not only do you get a citation, but you get links to others who are citing that paper as well. (pretty sweet)  Pinboard has been referred to as "del.icio.us on steroids."  You can bookmark and tag things, but also have it automatically bookmark and tag anything you tweet with a link in it.  There's a mobile version too.  JournalTOCs comes from the UK and is a service that provides tables of contents for free, focusing primarily on open access publications right now.  Topsy is one real-time search company that is doing well – it creates an archive of Tweets that goes back 3 or 4 years right now.  Three more: BASE, Issue Map, and Many Eyes (no time to describe, but go look at them!)

Marcy Phelps: Marcy discussed adding value to your search results.  Her presentation is at http://PhelpsRsearch.com/cil2011.  In an age of diminishing resources, researchers need to surface their value and think: can you be replaced?  What can we do that Google, Watson, and other search tools cannot?  Information professionals are uniquely qualified to add the kind of analysis that adds value.  We can make comparisons, look at patterns, chunk content together, prove or disprove hypotheses, and answer that bottom-line question: so what?  We have to listen to our customers.  What would be valuable to them?  Would it help to have this in a certain format?  Once we ask those questions we need to shut up and listen.  We can create research products that are helpful for others, like Issues Tracker or Know Before You Go.  Here's how to add value.  Add a table of contents.  Add an executive summary (one page, bulleted).  Add a cover memo listing the purpose of the report, methods used, and any issues raised.  Then in 25 words or less report your findings.  Add quick article summaries to the report.  Add meaning to boring numbers – add charts and infographics.  Build a dashboard with some pretty charts – you can just do it in Word.  Try different views of information.  Don't give interview-by-interview summaries; summarize all the answers to one question in one spot.  You can also add a matrix of data, a timeline, whatever makes sense for what you're presenting.  Also, use specialized tools to help you do your work.  Use Google Trends, pre-formatted profiles, data mining, and fee-based sources that can get you analyzed data immediately.  Consider new formats – try PowerPoint, an in-person or phone presentation, or create a video.  Finally, create your value-added toolbox…use Word styles, a chart gallery, templates, and your branding.

Natasha Bergson-Michelson: Her job, day and night, is to teach people how to search.  She talked about simple tricks like doing filetype searches in Google.  These types of tips are awesome, but our users don't remember them.  She says someone recommended the following to her: imagine your perfect source before you start searching.  So she started teaching this method to her students.  The first part is that if you're using a search engine, imagine the answer and not the question; use the search terms and phrases that would appear in the answer.  This is the big thing…just stop to think before searching.  Use quotation marks for phrase searching as well.  You can search for date ranges in Google too – e.g. "1995..2010"; the .. looks for every number within the range (nice!).  She gave a great tip for finding books by color – do a Google Image search for the topic or title of the book, e.g. "Rosa Parks", then go into the lower left corner and limit by color (pink), and voila, you get possibly relevant book covers.

Tamas Doskocz: What is semantic search?  A search, a question, or an action that produces meaningful results even when the retrieved items contain none of the query terms or the search involves no query text at all.  Semantic search is "what is possible with today's technologies for search."  Google's recipe search is one example of an attempt at this.  It links people, algorithms, the social web, and information in machine-understandable and processable forms.  There are a number of semantic search engines that focus on different disciplines, and these specialized engines do a better job with that type of data.  An example is HealthMash, a system driven by consumer health knowledge bases that performs semantic searches quite successfully.

Greg Notess presented this popular session.  We are pretty much down to Google and Bing.  Yahoo is being powered by Bing.  Ask is contracting its database out to some as-yet-unnamed company and has been focusing on its Q&A technologies.  Cuil is gone.  Smaller search engines like Blekko, Exalead, and Gigablast are out there, but nothing is at the level, size, and scope of Google and Bing.

Death of search?  We've seen the behavior of searchers change over the years.  Content farming is having a detrimental effect on the accuracy and clarity of search engine results.  There is a huge economic side to this (advertising).  eHow and Wikipedia don't have bad information necessarily, but you need to be cautious.  So who qualifies as content farmers?  Allexperts, ChaCha, Answerbag, Mahalo, eHow, Encyclopedia.com, 123people, FixYa, Seed, ShopWiki, and more.  Associated Content was purchased by Yahoo for $100 million.  AOL started up Seed and then bought the Huffington Post for $300 million.  Demand Media had an IPO with a valuation of over $1 billion.

What are the content farmers writing about?  When we start to recognize content farm materials, know that they were created very quickly by the writers…which definitely affects quality.  There is also a lot of screen-scraping happening – near-duplicate copies of original content on aggregator websites.

Google has had some major changes recently.  They launched the Panda update, which was an attempt to target the content farm sites.  This has changed more than 11% of Google's search results.  How well did it work?  Many domains lost ranking: ezinearticles.com, Associated Content, and others.  But eHow, one of the most egregious examples of a content farm, actually gained a little bit of traction in results.  Hmmm…

Google blocking is useful: you can choose to block all results from a particular domain in your search results.  But do you really benefit from this?  Sites can change their content completely, which could happen on some of these content farm sites.  Three or four years from now, will you remember which sites you blocked?

The little stars that let you “favorite/bookmark” a site in search results are now gone in Google.

The big change in this last year with Google is the sidebar.  There is a list of only a few select databases (everything, images, video, etc.) but you need to click on More to see a full list, something easily missed.  One of the new databases is “Recipes” but be aware that these only show sites that have used special Google mark-up language to be included.

Google has been working hard to have better date information about their search results (when information was first posted).  It's an easy way to limit results to only recent resources.

There are some other options in the sidebar, like “Social” which requires you to be logged in and to have a Google profile set up.

Greg also showed the many additional options in the Advanced Search page.

Google calls the “sponsored results” “Ads” now.  Yay!  Clear language.

Greg talked about Google Instant as well…a technology that saves Google processor time and money, but not really a benefit for the user, says Greg.  With Google Instant on you only get 5 recommended search terms as you type in the search box, whereas without Instant you get 10.  Chrome's Omnibox (the URL bar) now has the ability to do instant search too.

Google Preview gives you the little magnifying glass next to each search result.  This is really a copy of something Bing was doing.  You can't turn the previews on or off.  Most people in our session room dislike them.

Google Encrypted Search – if you are at a firewalled location, you can encrypt your searches so that your internet service provider can’t see what you’re searching.  Only Google can (muah ha ha ha ha).

What features did we lose from Google?  SearchWiki and the little bookmark stars.  The top toolbar changed a bit.

The social search results that show up in Google draw from a lot of different sites, but not Facebook.

With Bing, if you have Facebook Connect, you can see that information in your results. Blekko shows pages and sites that have been liked by friends on Facebook.

If you have a large Facebook network, this is useful.  If not, then not so much.

Bing still has the cached page link (though it moved).  It also allows you to share search results through Twitter, Facebook, etc.  They have scholarly searching as well that pulls in Microsoft Academic Search data.  The image search pulls in noticeably different results than Google Image Search does.

Greg recommends looking at your search engine preferences, including the ability to see what types of ads are being served up to you and why.  You can opt out of this so that you’re not profiled for ads.

Blekko is interesting – Greg suggests trying out /liberal and /conservative.  He also recommends looking at Qwiki.

Mary Ellen Bates started her session talking about Google's search operator AROUND(#) (yes, use all caps and put in a number for the distance between the two words, e.g. cats AROUND(5) toys).

Google Books has done data mining and through the Ngram viewer you can compare word usage over time, phrases as well.  Example: comparing the usage of kindergarten, child care, and nursery school.

Google is coping with content farms.  So there’s a nice trick to block crappy sites.  The option to block a domain shows up underneath each search result on the personal search results page.  If you click on that, then every time you do a search on Google while logged in that site’s results are blocked.  They support Firefox, Chrome, and Internet Explorer (they say) but it only seems to work in Chrome, says Mary Ellen. Fishy, eh?

Using Wikipedia’s “concepts related to…” list can be helpful in pointing you to related topics, especially when beginning your research.  She points out that it’s a way of getting a bigger sense of the ecology of the information environment and finding that hidden disruptor on the periphery.

Yahoo does demographics, sort of.  Yahoo Clues shows queries by age, gender, what they searched before and after the search in question.

A nice feature in Bing is the NEAR operator between two words (autism near:5 vaccination).  Bing also lets you limit a search to sites linked to from a specific URL: link:fromdomain:alzheimers.org trials.

DuckDuckGo.com provides good disambiguation.  As soon as you type in your query it shows you all of the alternative uses of the word.  The page also live-loads at the bottom, so you don’t have to click to get to the next page.  And they don’t track your search results.

Blekko is a site that Mary Ellen likes.  Blekko blocks spam and content farms like eHow and Allexperts, so the search results are cleaner.  It also offers specialized slash tags – e.g. "/likes", which only returns pages your Facebook friends have liked (requires Facebook Connect).  /relevance does a relevance sort and /date does a date sort.  /rank gives you some additional information on why the sites are ranked the way they are.  This could be super-useful for SEO if you're trying to raise the rank of your library's pages for certain searches.

Waybackmachine.org [is awesome!] lets you scroll back through time and skim through the life of a website.

Mary Ellen plugged Yelp as well.  The reviews contain a lot of useful information on local businesses.

FaganFinder.com has been around for a decade and has been rejuvenated lately.  It's an aggregation of what the author considers the best places to go for particular types of information.  If you're trying to educate your users that there is life beyond Google, this is a good starting point for them.

When is good enough good enough?  She asks us: would you rather be perfect or successful?  How much is this information worth?

Samepoint.com provides a good social search.  You see aggregated and filtered results.  It also associates positive and negative words with your search terms to gauge how positively or negatively the word or brand is being talked about.

Topsy is also a social search engine and lets you search for hashtags.  You can limit it by time and by date.  It's searching pages that were linked to from Tweets.  Tweeps is a good way of getting a sense of who follows the people you follow, and of other social relationships in your extended circle.  Mary Ellen emphasized the importance of social search resources.

James Crawford, Engineering Director for Google Books, was set to present this morning's keynote.  However, Mr. Crawford chose to fly in on the red-eye, which was of course delayed, and he didn't make it here for his talk.  This provides a good lesson for all speakers – don't plan to fly in just a few hours before you're supposed to speak.

Instead, Information Today quickly and fearlessly put together a panel of experts to discuss the topic: Roy Tennant, Dick Kaser, Stephen Abram, and Marshall Breeding.  I want to give kudos to the panelists for giving a salient discussion.  Good job guys!

Marshall Breeding started by talking about the (until recently remote) idea of digitizing the world’s books.  Very few libraries have the ability to digitize and provide full text discovery.  Companies like Google are the only ones digging in to this.

Stephen Abram posited, "What are the unintended consequences of this Google Books project?"  There are 15 million books online now, more than in any other library except the Library of Congress.  Once we separate the entertainment group of materials from the answer/research group of materials, we have a dangerous bifurcation.  What's the difference between the chapter of a scholarly work and an article?  Are we going to start aggregating chapters together?  Library catalogs cannot handle this kind of material – how do you describe a 12-chapter book with only 3 subject headings?  The free-text aspects of searching Google are going to change the dynamics of the answer space.  And those books aren't going to be a "book database," but rather fully integrated with websites, video, articles, etc.

Dick Kaser compared this to the digitization of journals years ago.  We do this because we can – we have the storage space, the ability to digitize rapidly, and hey – Google has the money!  There was some controversy over those first libraries that signed on with Google.  The conventional library wisdom is to never trust a commercial vendor.  If Google sits on top of the vast amount of data and information, what's then left for libraries in this space?  Perhaps libraries helping people digitize their own collections, digitizing rare local materials.  Maybe Google holds the books and the data, but perhaps libraries help out with how to search them effectively.

Years ago, Roy Tennant argued in a debate here at CIL that the Library of Congress would never be fully digitized.  He says he's ready to eat his hat on that one.  He then brought the Internet Archive and the HathiTrust into the conversation as well.

Stephen Abram responded that you need to question why you digitize books.  Is it perhaps to be able to put ads into books?  The President of Demand Media said it would be brilliant to digitize books as a way to gather data in order to game the search engine results.  How do you drive search results based on things that are already written?  How many people paid some of the billions of dollars in Google's profits?  What are the consequences of a book database that serves up answers based on the needs of the advertisers – the people paying the bills?

Marshall Breeding noted that many libraries have very small collections, and having access to nearly countless book titles is very tempting.  The initial Google Books contracts with libraries did not give enough rights to libraries, and that has been a topic of a lot of conversations.  Subsequent partners for digitizing projects have asked for more after learning those early lessons.  The Internet Archive hasn't done the quantity of books that Google has, but the IA approaches the projects in a more library-friendly way.  The library has to pay a small fee for the digitization costs, but the business model on the output is certainly more library-friendly and rights-friendly.  How do we find agreements that give library users the best deal?

Dick Kaser, in looking at the commercial side of digitization, has seen more publishers talking about the potential of digital books. eBook standards were a  key topic years ago, but now it just seems that approaching things in HTML5 is the easier and more expedient approach.  [and tee hee! Dick mentioned the eBook Bill of Rights that Andy Woodworth and I worked on!]

At some point in 2011, the U.S. Supreme Court will make some decision about the in-copyright books in Google Books.  ALA’s concerns are about one commercial entity having control over these books, and the ridiculous nature of the “one terminal per library” set-up with no printing or downloading rights.  Are we okay with Google having control over access to this information?  How are we going to handle this as libraries? How are we going to advocate for our users’ rights?

Roy Tennant then brought up the "26-checkout eBook rental" issue with HarperCollins (see #hcod on Twitter for more on this).  Dick said that the idea of lending an eBook is "disrupting publishers" because they're so into tangible objects; if they allow it to be loaned, they feel that they have lost control of their product.  Marshall responded that this brings up the problem of libraries' automation systems: these systems are built to deal well with physical items, not digital content.  He thinks this is going to change.  What is the library's role when everything is streaming?  When books are published digitally only, and not in print?  We are trying to figure out what a lending model for libraries can be.  There is a real struggle between what publishers are worried about and their feeling that libraries are in the way.  We figured out how to do it in an age of physical bookstores and we need to figure it out in this new environment as well.  Stephen says that the HarperCollins issue is one example of playing whack-a-mole.  What would we do without Sarah Palin's book…omg!  Why are we only going after HarperCollins, and not after Simon & Schuster and Macmillan, who won't let us lend eBooks at all?  If we don't participate in the discussion, there is a danger that libraries won't have a role in eBooks in the future.

Marshall and Roy then talked about the impact on research libraries.  How do you manage large digitized collections in large research libraries?  Does it mean you can ship more books to storage or maybe even get rid of a few?  Back-files of periodicals don’t exist in many research libraries anymore, and if they do it’s only in storage.   The working collections will likely become more limited and agile.  It provides new opportunities to think about what library spaces can be.

Stephen talked a bit about eReaders.  Do we want Jeff Bezos controlling the market?  Do we want Steve Jobs's value system controlling what type of content is allowed on the market?  Because you control the patent on an eReader or have control over market share, should you have the right to disrupt the market and disallow content and information access?  He asked: How many of you would allow a single person to ban a book in your library?  Do we need to add a Banned eBook Week through ALA?  Did we let telephone handset manufacturers tell us what we could say on the phone?  Stephen then asked the question that has been riling me up for years: Why is the library profession so silent on an issue of such critical importance to the future of information?

The American Library Association elections are afoot, and as a member of ALA and LITA (Library and Information Technology Association) I wanted to chime in with a few recommendations of people whose work I know and trust.  Voting was supposed to open today (the elections website still says so) but so far there’s nothing up on ALA’s site yet.  Stay tuned though.

Please join me in voting for the following strong-minded and brilliant people:

ALA Council: Bobbi Newman, Holly Tomren, Wendy Stephens, Martin Garnar, Matthew Ciszek, and Kate Kosturski

LITA Vice President/President-Elect: Zoe Stewart-Marshall

LITA Board: David Lee King, Lauren Pressley, and John Blyberg

As the debates rage about digital content, publishers, consumers, and libraries, I was reminded of a piece on DRM I wrote years ago.  In June of 2007 I published an article in School Library Journal entitled “Imagine No Restrictions: Digital Rights Management.” The opening line of my article is “We dream of a world with free access to content.  In the meantime, there’s DRM.”  *sigh*  Sadly, that is still true.

I wrote the article as a primer for library staff on what DRM is and why it matters for libraries.  In re-reading that article from long before eReaders became popular, I was struck by how applicable the ideas still are today.  The article covers device compatibility, DRM as a roadblock to use, and the archival issues raised for libraries.  I also provide talking points for discussing DRM and library digital content with library users…something we all end up doing, often with a frowny face and a serious sense of guilt.

If you’ll indulge me, here are two passages from the article that seemed to bear repeating, in addition to my grumpy epithet for DRM: “Despicable Rights Meddling”:

If you buy a physical version of a song or movie, you are warned about the law, but generally trusted to follow it. If you buy a digital version, however, the DRM code forces compliance.

—–

Libraries must be part of the solution here, not the problem. With our current econtent models, we’re coming down on the wrong side of this debate—not the side of content delivery, accessibility, and customer service, like we should. Publishing companies like Springer and BWI are offering ebooks free of DRM. This is the model we should be promoting and demanding from all vendors. Otherwise we will continue to limit content to a select group of our users, and that select group will continue to get smaller as DRM becomes increasingly restrictive.

This article was a reminder to me that we’ve been discussing this issue for many years.  Librarians have cared about access to digital content since digital content was invented.  We have worked to educate staff and customers.  We have asked for leadership from our professional organizations in legislating change or working with the Librarian of Congress to make effective changes to the Digital Millennium Copyright Act.

And still, we wait.  But now we’re mad, we’re organized, and we’re pushing for change from all directions…not waiting for approval or sanctions from above.  As a friend said to me last night, “Librarians are going all Egypt on this one.”   In response to both the HarperCollins eBook licensing changes and the eBook User’s Bill of Rights we’ve seen grassroots efforts from the masses, opinions from all sides, and social media organization and information dissemination.  My favorite is the Librarians Against DRM and Readers Against DRM graphics (designed by cartoonist and QuestionCopyright.org artist-in-residence Nina Paley).  We’ve also seen mass media coverage on a scale I haven’t seen since the PATRIOT ACT.  And it all started with opinionated librarians blogging, tweeting, and some great investigative reporting from Josh Hadro at Library Journal.

I am very happy that the American Library Association is now moving forward with a game plan for advocacy, including the work of the ALA eBooks Taskforce I'm a part of.  In the next couple of weeks I expect we will see more on these issues from ALA and from librarians directly.

I encourage you to inform yourself, inform your co-workers, and push for change.  Call and write to publishers, authors, and your larger library organizations like consortia or regional partnerships.  Your voice matters.  Your voice can indeed create change.  Continue the revolution.