I am going to do what few librarians ever do. I am going to be perfectly frank about negative experiences with a vendor.  Consequences be damned.

I remember Meredith Farkas’s courageous post about EBSCO’s unethical practices in April of last year. There were repercussions for her post, but she met her goal of getting the library community talking about EBSCO as a vendor to libraries.

Historically, librarians have publicly posted only about the positive experiences they’ve had with vendors. The negatives seem to come out only at late-night drinking fests at conferences or in private conversations. I believe this lack of willingness to speak frankly about the companies we buy stuff from is due to three problems:

  1. As a profession, we’re generally nice people and don’t like to talk smack about anyone. This is generally a wonderful trait, but when we’re talking about allocating our scarce resources it can be extremely detrimental.
  2. Librarians are afraid of repercussions at work, including being disciplined, yelled at, or just plain fired.
  3. Librarians are afraid of the vendors, who they think might give them worse prices and support if they bad-mouth the product.

But hey, this is me. I’m tired of being quiet. And I’m not pulling any punches now that I don’t have to in order to preserve my job.

I’m going to tell you about my experiences with Freegal, and why Freegal is a bad investment for libraries.

I’ve worked with Freegal. I have dealt with the sales and support staff, used it as both a user and an administrator, reviewed stats, and fielded patron comments and complaints. And one of the reasons I’m posting this now is that Freegal is now cold-calling and cold-visiting my area’s libraries, doing the typical sales dance that hides the limitations and problems of the product. A visit happened at my library yesterday, where the sales rep asked for our director for an in-person cold-call sales pitch. I am pretty sure he didn’t realize that I now work for this library instead of my previous employer.

If I’m being polite, I would say that Freegal is not a wise investment for your library. If I’m being honest with you over a beer, I’ll say that Freegal is a frelling piece of junk and I don’t trust the company.

But whether you prefer the first or second mode of expression, the message is the same. Libraries have been lured into purchasing Freegal by well-crafted sales pitches, promises of future enhancements, and a false god of shiny technology.

Background
Freegal is a product that offers libraries downloads of Sony music as DRM-free MP3s (and DRM-free MP3s are awesome; let me be clear about that). Freegal Music is owned by Library Ideas, LLC, a company that has a single webpage. Freegal likewise has only a single webpage. Now, if my librarian skills are telling me anything, I’d be a bit skeptical of giving tens or hundreds of thousands of dollars to a company with so sparse a web presence.

The last library I worked for purchased a year’s block of downloads from Freegal. The staff worked with it for a while and we had some bad experiences, including many complaints from customers.

I wrote a post in September 2010 about libraries and digital music: “Music in Libraries: We’re Doing It Wrong.” The post mentioned Freegal, as well as Overdrive and Alexander Street Music. Freegal did not like what I had to say. Comments were made on my blog, and a call was placed to someone above me in the library. This call from Freegal included asking my supervisor to tell me to remove my post, or at least revise it. I said no, and thankfully my director backed me up. This blog was and is a journalistic enterprise of my own making, done on my own time. If I want to comment on a vendor’s product or service, I can do so. We all can.

My goal here is to pull back the veil, to give you an honest assessment in my opinion, and to educate people about what exactly it is that Freegal offers libraries.

Why are we buying pay-per-use MP3s?
I believe that, without really thinking it through, libraries that have subscribed to Freegal have made a fundamental change in their collection policies and expenditures.

Libraries typically buy one copy of something and then lend it out to multiple users sequentially, in order to get a good return on investment. Participating in a product like Freegal means that we’re not lending anymore; we’re buying content for users to own permanently so they don’t have to pay the vendor themselves. This puts us in direct competition with the vendor’s own consumer sales, and vendors will never make more money off of libraries than they will off of direct consumer sales.

That puts libraries in the position of being economic victims of our own success. I would think that libraries would remember this lesson from our difficulties with the FirstSearch pay-per-use model, which most of us found to be unsustainable.

The more popular our service gets, the less able we are to meet demand. Private industry is moving away from pay-per-use: Netflix, Rdio, and others offer all-you-can-eat subscription models instead. Freegal puts us in danger of manufacturing a demand we can no longer meet. The pay-per-use model undermines the economic power of libraries, which is to aggregate our communities’ buying power and take advantage of economies of scale. I believe strongly that buying a product like Freegal could be damaging to libraries’ long-term economic sustainability.

Selection
It’s important to realize that Freegal only offers Sony music. That’s it – one record label. Despite promises from the very beginning that they’re “just about to add new record labels,” they have not offered anything other than Sony music. To me, offering a service that only presents one publisher’s offerings is not in keeping with our collection policies that require non-preferential selection. Also please realize that not all Sony music is offered in Freegal. It’s a select number of albums from a select number of artists.

Cost
Freegal offers two models for purchase:

  • A library can buy a block of downloads, a set number for the whole year, but once those are gone they’re gone. You can either set a library-wide weekly cap (divide your yearly total by 52) or choose no cap. Users are limited to 3 downloads per week.
  • A library can buy the “unlimited plan” (which isn’t really unlimited) which lets any library user download 3 per week, with no weekly or yearly cap. And you guessed it, this option is significantly more expensive. For a large urban library, the quoted cost was well over $100,000—over half of the library’s entire eResources budget.

This pricing model is not sustainable for libraries. From the dozen librarians I’ve spoken with at libraries with Freegal, it seems that the average cost per song is on par with the consumer cost of a song ($1) or slightly more ($1.10 or so). If we really want to offer users a wide selection of MP3s they can keep forever, wouldn’t it be cheaper and smarter to simply buy users 3 MP3s from any record label through Amazon or iTunes?
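To see how the math shakes out, here’s a quick back-of-the-envelope sketch in Python. Every number in it is a made-up illustration, not an actual Freegal quote or contract figure:

```python
# Back-of-the-envelope Freegal block-plan math.
# All figures are illustrative assumptions, not actual contract terms.

block_cost = 20_000          # hypothetical annual price of a download block ($)
downloads_in_block = 18_000  # hypothetical number of downloads in that block

# Effective cost per song, to compare against ~$1 retail
cost_per_song = block_cost / downloads_in_block
print(f"Effective cost per song: ${cost_per_song:.2f}")  # ~$1.11

# Library-wide weekly cap, if you divide the yearly total by 52
weekly_cap = downloads_in_block // 52
print(f"Library-wide weekly cap: {weekly_cap} downloads")  # ~346 per week

# The same number of songs bought at retail ($1 each)
retail_total = downloads_in_block * 1.00
print(f"Retail equivalent: ${retail_total:,.0f} vs. block cost ${block_cost:,}")
```

With numbers like these, the library pays retail price or more per song, and gets a capped, one-label selection for it.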

Feedback on Freegal from Users
We had hundreds of complaints from users about Freegal at my last library. I have heard the same thing from most of the other Freegal libraries I’ve spoken with. The main complaint I heard was that the 3-songs-per-week limit was not sufficient. We heard repeatedly that having to schedule yourself out for a month’s worth of downloads to get a full album was not realistic. Other users were severely unhappy that the collection was only Sony, and some pointed out the preferential treatment, asking if we’d been paid off to offer only Sony music. Others reported that they loved the service at first but found the weekly per-person limit just plain annoying and stopped coming back to the site after a few weeks. Persistent, not time-limited, access is the expected norm – and when a service doesn’t provide that, it creates less interest in repeated visits and use.

Customer Service
I forwarded user comments, questions, and complaints to Freegal. I got no response. Even though I was named as the contact for my library, the company persisted in contacting another librarian (who was the initial sales contact) for all issues.  It was as if I didn’t exist.  Even after a face-to-face meeting where the rep was told by our administration to only work with me, he continued to exclude me. It was like a junior high shunning.  Technical problems that users encountered were never addressed. It was like sending questions and emails into a vacuum. A company that behaves that way does not get a gold star in my book.

False Promises and Bullying
In addition to the bad customer service issues listed above, I also question the overall ethics of the company and its sales staff. Numerous promises were made to me (“off the record of course”) that new content from other record labels would be added in the next few months. It’s always “in the next few months” and yet that elusive day never seems to come. My advice is to never, ever buy a product based on features or content that is not explicitly included in the contract.

The company also swears that they are giving you an amazing deal that no other library receives. And guess what? Everybody’s getting that same deal. This happens with a lot of vendors. Remember: their job is to sell. Negotiating with vendors is sadly often more like negotiating for a used car than it is like buying a fixed-price item off the shelf.

I condemn the company’s decision to contact my supervisor to tell me to take down my post that mentioned their product.  Even if this had been on a library blog that I’d written on library time, that action is a very poor way to respond to criticisms of your product.  But the fact that this blog is mine and not the library’s made it indefensible.

The salesperson I worked with also told me that the contract terms, including pricing, were “strictly confidential” and could not go out of the room. Heads up: unless you signed a non-disclosure agreement with a company, you can discuss any contract terms with whomever you want. That kind of meaningless and baseless bullying from a vendor sits very poorly with me.

So why does Freegal continue to be such a big temptation for libraries?
One word: technolust. Libraries are keen to offer digital content to their users, a worthy and long-overdue goal. But Freegal’s model is simply not the way to be successful with digital music. We need to wait for, or create ourselves, something that does work for libraries. Freegal doesn’t work because:

  • A selection of songs from a single record label is not the way to be successful.
  • A costly and unsustainable pay-per-use model is not the way to be successful.
  • Low weekly download limits for users are not the way to be successful.
  • And working with a vendor that is a bit shady and unresponsive is not the way to be successful.

Am I wrong?
I admit that it’s possible. My own experiences may be drastically different from everyone else’s, though I’d be surprised if that were the case. If you and your library have had good experiences with Freegal, leave a comment here. If you disagree with the assertions I’ve made above, tell me why I’m wrong. If you’ve had bad experiences, please share those too—anonymously if you feel more comfortable sharing that way. I encourage you, though, to name your library and yourself. I want to see more people talking about the positives and negatives of various vendor experiences. We need to help each other as professionals to get the best deal from the best vendors out there. Sharing information is good, right? It’s the principle to which our profession is dedicated. Let’s share as much as we can, the good and the bad. In this case, unfortunately, it’s the bad.

Wednesday May 4, 2011 will be the third annual international Day Against DRM.  I know you want to participate *grin*

You can currently sign up for a mailing list to get involved with the Day Against DRM, and a wiki was recently started too. According to the Defective by Design website:

The Day Against DRM is an opportunity to unite a wide range of projects, public interest organizations, web sites and individuals in an effort to raise public awareness to the danger of technology that requires users to give up control of their computers or that restricts access to digital data and media. This year, we’ll be helping individuals and groups work together to create local actions in their communities — actions will range from protesting an unfriendly hardware vendor to handing out informative fliers at local public libraries!

I love it! I’ve signed up and am thinking about how my library could participate in this effort. What can you do?

Join me (along with Henry Bankhead, Mark Coker, Eli Neiburger, and Mary Minow) on Monday April 11th at noon (Pacific) / 3pm (Eastern) for a raucous discussion on eBooks and library lending.  If you have questions in advance, send them in to me!  The day of the webinar, sponsored and hosted by Infopeople, go to http://infopeople.org/training/webinar/ebooks

Monday, April 11, 2011
Start Time: Pacific – Noon, Mountain – 1:00 PM, Central – 2:00 PM, Eastern – 3:00 PM
Speakers: Henry Bankhead, Sarah Houghton-Jan, Mark Coker, Eli Neiburger, and Mary Minow

  • Is there true ownership of eBooks for libraries?
  • Can libraries exist without ownership of eBooks?
  • What is the best access model for eBooks?
  • Is there a right of first sale that applies to eBooks?

The recent decision by HarperCollins to switch to a licensing model for eBooks has the public library world in an uproar and has spawned numerous boycotts of HarperCollins by public libraries. Instead of allowing libraries to “purchase” one digital copy and lend it sequentially to an unlimited number of library users, HarperCollins has instead opted to only license each of their eBooks for 26 uses. After the 26 circulations are used up, the library must purchase an additional license.

This change has primarily affected Overdrive users, the lion’s share of the public library eBook market, but will apply to all distributors of HarperCollins eBook content. More important, this shift exposes the questionable nature of ownership in the library and consumer eBook landscape.

Please join us for a lively one-hour webinar panel discussion of the role of eBooks, the public library, lending limits, ownership, the right of first sale, and digital copyright.

Webinars are free of charge and registration is only done on the day of the event on the WebEx server. No passwords are required.

Do you require an accommodation? Closed captioning will be provided upon request. For this service, please notify ipweb@infopeople.org at least 3 days before the webinar.

The following passage is from Peter F. Hamilton’s 2008 book Misspent Youth, a fabulous piece of speculative science fiction.  I was struck by Hamilton’s description of a world where information truly is free, where digital access to data is considered to be a right and not a hypothetical dream. While I don’t see things shaking out the way he predicted, Hamilton does a fabulous job of pointing out the benefits and the possible negative effects of a 100% free datasphere (a much better name than “internet,” incidentally).  Read on, and see what this makes you think of.  How can the publishing industry fulfill the human need for information instead of fighting against it?  How can licensing for individuals and libraries work for the betterment of the artist and the consumer?  I often find that science fiction and speculative fiction get my mind going about current issues and challenges.  I get inspired.  See if this passage does for you what it did for me.

Excerpt from Peter F. Hamilton’s Misspent Youth:

Tim produced a mildly awkward grin. He’d grown up with every byte in the datasphere being free. That was the natural way of things; instant unlimited access to all files was a fundamental human right. Restriction was the enemy. Evil. Governments restricted information, cloaking their true behavior from the media and public, although enough of it leaked out anyway. He’d never really thought of the economic fallout from the macro storage capability delivered by crystal memories. It was a simple enough maxim: Everything that can be digitized can be stored and distributed across the datasphere, every file can be copied a million, a billion, times over.  Once it has been released into the public domain, it can never be recalled, providing a universal open-source community.

After the turn of the century, as slow phone line connections were replaced by broadband cables into every home, and crystal memories took over from sorely limited hard drives and rewritable CDs, more and more information was liberated from its original and singular owners.  The music industry, always in the forefront of the battle against open access, was the first to crumble.  Albums and individual tracks were already available in a dozen different electronic formats, ready to be traded and swapped.  Building up total catalogue availability took hardly any time at all.

As ultra-high-definition screens hit the market, paper books were scanned in, or had their e-book version’s encryption hacked.  Films were downloaded as soon as they hit the cinema, and on a few celebrated occasions actually before they premiered.

All of these media were provided free through distributed source networks established by anonymous enthusiasts and fanatics–even a few dedicated anticapitalists determined to burn Big Business and stop them from making “excessive profits.”  Lawyers and service providers tried to stamp it out.  At first they tried very hard.  But there was no longer a single source site to quash, no one person to threaten with fines and prison.  Information evolution meant that the files were delivered from uncountable computers that simply shared their own specialist subject architecture software.  The Internet had long ago destroyed geography.  Now the datasphere removed individuality from the electronic universe, and with it responsibility.

Excessive profits took the nosedive every open-source, Marxist, and Green idealist wanted.  Everybody who’d ever walked into a shop and grumbled about the price of a DVD or CD finally defeated the rip-off retailer and producer, accessing whatever they wanted for free.  Record companies, film studios, and publishers saw their income crash dramatically.  By 2009 band managers could no longer afford to pay for recording time, session musicians, promotional videos, and tours.  There was no money coming in from the current blockbusters to invest in the next generation, and certainly no money for art films.  Writers could still write their books, but they’d never be paid for them; the datasphere snatched them away the instant the first review copy was sent out.  New games were hacked and sent flooding through the datasphere like electronic tsunami for everyone to ride and enjoy.  Even the BBC and other public service television companies were hit as their output was channeled directly into the datasphere; nobody bothered to pay their license fee anymore.  Why should they?

After 2010, the nature of entertainment changed irrevocably, conforming to the datasphere’s dominance.  New songs were written and performed by amateurs.  Professional writers either created scripts for commercial cable television or went back to the day job and released work for free, while nonprofessional writers finally got to expose their rejected manuscripts to the world–which seemed as unappreciative as editors always had.  Games were put together by mutual interest teams, more often than not modifying and mixing pre10 originals.  Hollywood burned.  With the big time over, studios diverted their dwindling resources into cable shows, soaps, and series; they didn’t even get syndication and Saturday morning reruns anymore, let alone DVD rental fees and sales.  Everything was a one-off released globally, sponsored by commercials and product placement.

It was a heritage Tim had never considered in any detail.  Then a couple of years back he’d watched Dark Sister, the adaptation of one of Graham’s novels.  The pre10 film was spooky and surprisingly suspenseful, and he’d made the error of telling Graham he quite liked it.  The novelist’s response wasn’t what he expected.  Graham held his hand out and said: “That’ll be five euros, please.”

“What?” a perplexed Tim asked.  He wanted to laugh, but Graham looked fearsomely serious.

“Five euros.  I think that’s a reasonable fee, don’t you?”

“For what?”

“I wrote the book.  I even wrote some of the screenplay.  Don’t I deserve to be paid for my time and my craft?”

“But it’s in the datasphere.  It has been for decades.”

“I didn’t put it there.”

Tim wasn’t sure what to say; he even felt slightly guilty.  After all, he’d once complained to Dad about not raking in royalties from crystal memories.  But that was different, he told himself: crystal memories were physical, Dark Sister was data, pure binary information.

“Fear not, Tim,” Graham said.  “It’s an old war now, and we were beaten.  Lost causes are the worst kind to fight.  I just enjoy a bit of agitation now and then.  At my age there’s not much fun left in life.”

Tim didn’t believe that at all.

Julian Aiken, Yale Law Library

Julian was a hilarious speaker.

Every librarian ought to be allowed to muck around with a brilliant but rummy idea.  “When we aim for the stars, we tend to bonk our heads on the ceiling.”  He proposed automated materials sorting but was denied.  He proposed barcoding his dog and checking the dog out.  He wanted to take a look at how others achieve their moments of brilliance.  “When in doubt, cheat, copy, steal, and pillage.”  If you’re snooping around for brilliance and innovation, there’s one place to go: the creators of the deep-fried Snickers bar.  So he went to Google instead for what they do.

Where does the 1% inspiration come from?  Google has their 80/20 innovation model, which has produced Google News, Gmail, and the Google shuttle buses.  This model encourages Google employees to spend 20% of their time on projects that speak to their personal interests and passions.  His immediate bosses are splendid individuals who were willing to give this model a try.  Achieving institutional buy-in for an 80/20 model can be hard.  A commonly raised objection has been that in financially tight times, it’s difficult or even irresponsible to take staff away from their core duties.  The Google innovation model provides libraries an opportunity to reward deserving staff.  Unscrupulous institutions award promotions, awards, and bigger offices in place of cash.  With the Google model, staff are rewarded with a variety of work that they care about, which ultimately makes the workplace a pleasanter and happier environment.

Is necessity truly the mother of invention?  He listed several inventions from World War II that had the audience rolling.  And then there was a pantomime horse – two ends, both alike in dignity – meant to represent different departments in a library able to work together respectfully.  He works at the Beinecke Library at Yale.  The proposal for the Google 80/20 program resulted in a lot of cross-training in the library.  They have 6 staff in his department, 4 of whom are actively participating in the program.  It’s more like 90/10, though.  Some of the projects they’re working on: digitization, and an open access legal scholarship repository.  He is going to get his dog circulated at the library now too :)

eBooks and Their Growing Value for Libraries (PART 1)

Chad Mairn, Amy Pawlowski, Sue Polanka, Ellen Druda

Amy: 1/5 of the US online population reads at least two books per month.  How are we going to capture that audience?  If you own a Kindle and download a book a week, you spend $500 a year on eBooks.  Where can you get them for free?  The library.  94% of academic libraries already offer eBooks.  By 2020, academic eContent expenditures will reach 80% of the collection budget.  67% of academic libraries either already offer or are planning to offer mobile device content and services.  In 2015, 25% of textbook revenue will be from digital textbooks.  Course management systems are a great way to embed our resources for students.  24/7 access is a huge boon for eContent.  eContent meets users where they are.  The eReader and tablet market is huge: 1/3 of US online customers will be using a tablet in the next few years, and smartphone sales have surpassed PC sales on a quarterly basis.  The argument that eBooks are only for people who can afford the devices is no longer salient.  Smartphone penetration is highest among ethnic minorities.  (Sarah’s Note: Socioeconomic status and ethnicity are not the same thing. I don’t understand the connection made here, and actually found it rather offensive.  It’s like saying “all the poor people in your communities are minorities.”)  Pricing models will change as the market grows.  HarperCollins was one of the first big publishers to start offering eBooks.  Things are going to change and we’re going to have to deal with it.  72% of libraries are offering eBooks, and the rest should start thinking about it, perhaps by joining a local consortium.  If you have a collection and you’re not taking it seriously, you need to start.  If you don’t put new titles in there, people won’t come back to look at the collection.  Consider circulating devices.  Start planning for the future of eBooks now.

Sue: The future is not eBooks; it’s eContent.  If you have the opportunity to sit down and talk with a publisher, take advantage of the chance to tell them what’s not working.  There are two task forces in ALA working on eContent, which we were encouraged to work with and follow.  Library Renewal is a group to be aware of – a non-profit organization founded by 5 librarians who feel very strongly about the future of digital content and its accessibility to all of our users.  In Ohio, they purchase their digital content – they don’t license it.  They had to build a platform and host it themselves, which is a lot of work, but it’s worth it to have ownership of the content.  Self-publishing is taking off; how are we going to buy that content?  Open access is a huge opportunity for us as well, and digital textbooks are on their way in too.  The estimate is that the Kindle will be free by the end of this year.  Sony Reader has a library program with their readers.  Overdrive is developing an eReader Certification Program.

Ellen: eBook circulation has skyrocketed in libraries over the last few years as smart phones, tablets, and eReaders became popular devices.  At her library, the traditional book club members aren’t interested in eBooks and have trouble with the technology.  You have tech-savvy eBook readers, traditional print book readers, and everyone else somewhere in between.  Staff training was necessary at her library when they brought in eBook readers; they had an Overdrive demo with various devices.  They did traditional marketing for eBooks: bookmarks, posters, banners, Twitter, Facebook, etc.  She mentioned iDrakula, a graphic novel that was turned into a book app.  They brought the author in via Skype for a back-and-forth with the library users who read the book.  Next month they’re doing a book discussion summit, trying to get traditional and eBook club members in one room.

Sujay Darji and Stephen Abram were the speakers for this session.  Sujay Darji started off by discussing his work at SWETS with eBooks.  SWETS is a subscription agent primarily known for aggregating periodicals.  So what do the content suppliers (aka publishers) need from subscription agents like SWETS?  Aggregators started approaching subscription agents wanting them to distribute their content.  A lot of small to medium-sized publishers were inexperienced with eBook distribution and purchasing models, and SWETS tried to close some of those gaps.  Subscription agents needed to decide what terms and conditions they wanted to put into place for eBooks.  What are the headaches librarians experience with eBooks?  It’s difficult to compare pricing between vendors because there are content and “platform” fees.  It’s hard to find out what eBook titles are available and hard to compare licensing terms.  Digital rights management, dictated by publishers, is inconsistent and horrible.  How many eBook platforms are there out there?  SWETS wanted to focus on acquisition of eBooks.  His approach to eBooks is three-fold: acquire, manage, and access.  Managing millions of cross-publisher eBooks in one platform lets you navigate and control your collection without having to jump between platforms.  With the SWETS model, there is no need to create a separate platform; simply integrate your eBooks into your existing publicly viewable discovery systems/ILSs.  The tool is free and open to all users, with no platform fee, and it serves as a research tool you can use to build your collection.

Stephen Abram’s talk was entitled Frankenbooks (LOVE IT!).   When we’re reading with a little bit of light on a screen, interaction with the screen is encouraged.  If we use a codex model to try to understand what textbooks of the future will look like, we’ll get it wrong.  How do you engage learners, researchers, teachers, curriculum heads, testers, and assessors to agree on reforming their eBook textbooks?  Most eBooks are text that you can read end to end; with many eBooks, though, you just want to read a specific section.  Stephen showed traditional publishing bingo and electronic publishing bingo cards :)  Want!  Why do people like the smell of books?  Smell is the largest memory trigger, and with books they’re remembering all of the things they learned, how they felt, etc.  How would you enhance a book?  What framework would you use?  We cannot take the old format and carry all the compromises forward.  Where do publishers move with all of the new options?  When you look at the physical act of reading, how does the act of learning happen?  The Cengage eBooks have embedded video, use HTML5, etc.  Look at the reading experience itself, not the devices.   He expressed deep concern about advertising making its way into eBooks, particularly in the Google Books project.  What if eTextbooks showed reports to teachers of what the students have actually read, how they’re doing on quizzes, etc.?  Scholarly works – how does one do profitable publishing of “boring stuff”?  Stephen emphasized the idiocy of the Google “single station per library” model.  Amazon squashed Lendle, a Kindle book lending program.  And Stephen pointed out that lending content—isn’t that something we do?  Device issues are huge.  Are we okay with Steve Jobs deciding what we read?   Stephen feels there’s less concern now about the craziness of eBook standards; we’re in a renaissance for formats and standards.  We don’t want to re-create all of the compromises of the 19th-century codex.  The enTourage eDGe reader is a dual-screen reader.  Librarians need to understand the US FCC whitespace broadband decision.  We need to be mindful of mobile dominance, geo-awareness, wireless as a business strategy, and that the largest generation is here and using this technology now.  What are we doing promoting a minority learning style (end-to-end text-based learning) to the majority of our users?  If we keep fighting all of our battles with publishers on text-based books, we’re failing as librarians.  Multimedia and integration is the future.  What is a book?  Why do people read?  And how do we engage with all of the opportunities we have in front of us now?  Serve everyone!  We have to move faster.  Try to influence the ecosystem on a large scale.  Work with your consortium to effect change.  Let’s move faster together!

David Lee King, Nate Hill, and I presented a session on making user interactions rock.

David’s half of the session was a discussion of “meta-social.”  How do you connect with your users?  David has a list of 8 metasocial tools.

  #1: status updates.  Answer questions, ask questions, market the library’s events and services, and share multimedia.  All of this adds up to real connections with your customers.  David’s library posted a user comment from their physical comment box about the art gallery, and the artist commented back, then a user… libraries connecting customers to the content creator.

#2: long posts.  Blogs are examples of this, as are Facebook notes and longer descriptions under a Flickr photo.  It’s a way to share ideas in a longer format—events, thoughts, reviews, new materials.  David’s library’s local history department posted a photo of a cupola from a demolished building in town.  They wrote about it in a blog post on their website, discussing the demolition and how they got the artifacts.

#3: comments.  All of those status updates and longer posts don’t live in a vacuum.  Comment back and have a conversation with users who are commenting to you.  On one of the library’s children’s blogs, the author commented back on a post about his/her book.

#4: visuals.  This can include photos and videos.  Blip.tv, YouTube, and Vimeo are the usual suspects for video; Flickr and Picasa are most used for photos.  And this visual content can be embedded in many places.  David showed a neat photo from the library’s “edible books” program.  It’s a way to extend a physical event, getting more customer interaction and use online than you probably did in person.

#5: livestreaming.  This allows people to watch moments as they happen.  David suggests livestreaming library events.

#6: friending and subscribing (aka following or liking).  This lets users tell you they like you, but it also is a way for you to show that love back to your users.

#7: checking in.  Yelp, Facebook Places, Foursquare, Gowalla.  You can do this at the library, posting good tips about your library’s services.

#8: quick stuff.  Rating, liking, favoriting, digging, poking, starring.  These are very informal quick interactions that tell you how much people like or don’t like something you’re doing.  You can embed Facebook liking into your website.

Suggestions for starting out with social media. The first tip is to stop. You need some goals and strategy.  Otherwise you’ll do it for a few months and then give up, and your site will live on, inactive and not useful.  What are you going to put out there, who is going to do the work, how do you want to respond to people interacting with you?  Listen to see if people are talking about you and read what they’re saying – on Twitter, Google Alerts, Flickr tags, etc.  You want friends!  So let people friend you and friend them back.  Focus on people living in your service area. Follow your customers first, not non-local figures like, say, other librarians.  Think about your posts as conversation starters.  Ask what your users think to encourage participation.  Customers love social media, they’re already there, and they’re waiting for someone to start the conversation.  That person is you.

This session was presented by Margeaux Johnson, Nicholas Rejack, Alex Rockwell, and Paul Albert.

Margeaux started by talking about VIVO’s origins.  It is not completely launched yet, but is being used and tested at many institutions.  It helps researchers discover other researchers.  It originated at Cornell and was made open source.  It was funded by a $12.5 million grant and involves 120+ people at dozens of public and private institutions.  VIVO harvests data from verified sources like PubMed, human resources databases, organizational charts, and a grant data repository.  This data is stored as RDF and then made available as webpages.  VIVO will allow researchers to map colleagues, showcase credentials and skills, connect with researchers in their areas, and simplify reporting tasks, and in the future it will automatically generate CVs and incorporate external data sources and applications.  So why involve libraries and librarians?  Libraries are neutral, trusted entities and technology centers, with a tradition of service and support.  Librarians know their organizations, can establish and maintain relationships with their clients, understand their users, and are willing to collaborate.  There is a VIVO Conference here in DC in August, where you can learn a ton more.

Nick then talked about why the semantic web was chosen for this project.  The local data flow in VIVO is relatively simple.  A cool feature allows all 7 operational VIVOs to connect with each other, somewhat like federated search technology.  Because the data is authoritative, they use URIs to track data about individual people within the system.

Paul then covered how the VIVO ontology is structured.  The data in VIVO is stored using the Resource Description Framework.  A sample semantic representation of the system’s data was displayed, connecting people who wrote articles together.  VIVO can create inferences for you as well.  There are different ways of classifying data: Dublin Core, the Event ontology, FOAF, geopolitical classifications, SKOS, and BIBO.  Several very complicated charts were displayed showing how different data in VIVO is connected.  For modeling a person, you’re going to have the person’s research, teaching, service, and expertise in their data set.  Different localizations are required by different institutions.  He described how to create localizations in VIVO, but gave the caveat that this functionality will not necessarily work across institutions.  He recommends the book Semantic Web for the Working Ontologist.

Nick talked about the importance of authoritative data in VIVO, of preserving the quality of the data.  There are many different kinds of source data: databases, CSV, XML, XSLT, RDF, etc.  These all go through a loading process: load the desired ontologies, upload the data into VIVO, map the data to the ontology, and finally go through data sanitation to fix mistakes and inconsistencies.
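As a rough sketch of what that load-map-sanitize flow can look like, here’s a minimal example using the Python rdflib library. The file names are hypothetical placeholders, and a real VIVO installation uses its own harvester and ingest tools rather than a script like this:

```python
# Minimal sketch of a load -> map -> sanitize flow with rdflib.
# File names below are hypothetical placeholders.
from rdflib import Graph, Namespace, RDF

FOAF = Namespace("http://xmlns.com/foaf/0.1/")

g = Graph()
g.parse("vivo-ontology.owl")  # 1. load the desired ontology (placeholder file)
g.parse("people.rdf")         # 2. upload/parse the harvested person data

# 3. Mapping check: which records are typed as people per the ontology?
people = list(g.subjects(RDF.type, FOAF.Person))
print(f"{len(people)} person records loaded")

# 4. Sanitation: flag records missing a foaf:name for manual cleanup
for person in people:
    if (person, FOAF.name, None) not in g:
        print(f"Missing name: {person}")
```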

Alex concluded the session by talking about the ins and outs of VIVO.  How do you work with VIVO data?  The easiest way is to crawl the RDF; you can also utilize SPARQL queries.  The University of Florida doesn’t have a facility to create organization charts, and what they do have is in inaccessible formats, so they hand-curated the charts.  When Alex wrote the program to handle this there were 500 people in it; now there are over 1,000.  The design includes a data crawl, serialization, formatting, and then exports into text, graph visualizations, etc.  VIVO also has a WordPress plug-in that exports data into WordPress sites and blogs.  Cornell had a Drupal site, and a module for importing VIVO data was created.  They’re working on developer APIs to expose VIVO data as XML or JSON, work with SPARQL queries, etc.  He also created an application called Report Saver which lets you enter a SPARQL query, save it, and pull out data on a regular basis for analysis.
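To give a flavor of what querying VIVO data with SPARQL looks like, here’s a minimal sketch using the Python SPARQLWrapper library. The endpoint URL is a placeholder (each VIVO installation exposes its own), and the use of foaf:Person and rdfs:label here is a reasonable assumption rather than a guarantee about any particular VIVO’s ontology:

```python
# Query a VIVO-style SPARQL endpoint for people and their names.
# The endpoint URL is a hypothetical placeholder.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://vivo.example.edu/sparql")  # placeholder URL
sparql.setQuery("""
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?person ?name
    WHERE { ?person a foaf:Person ; rdfs:label ?name . }
    LIMIT 10
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["name"]["value"], "->", row["person"]["value"])
```

This is essentially the kind of query a tool like Report Saver would store and re-run on a schedule.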

This session was presented by Emily Wheeler and Samara Omundson.

In 2009 the digital universe grew immensely.  If you picture a stack of DVDs reaching to the moon and back, that’s how much data growth we had.  By 2020, the digital universe is estimated to be 44 times the size it was in 2009.  We are drowning in data.  How can we make sense of it?  Apply structure to large quantities of data to help make sense of it.  Information professionals can lead the way through these piles of data, even if we’re not statistics junkies or graphic designers.  We know how to sift through vast quantities of data and pull out the few salient data points.  Visualized well, data conveys a clear message, cuts through the chaos, and helps to engage and inform stakeholders.

One strategy for information visualization is using topic clusters.  As an example, they searched for “Bieber Fever” with a general search engine and displayed the results in a hierarchical format on a single PowerPoint slide.  Another visualization was a branching choice – almost a spider web of information circulating out from a central point.  Another strategy is using time series visualizations.  These can be line graphs or bar graphs of a particular data point’s change over time.  You can also highlight the intersection between searches using search associations.  You can do this in spreadsheet tools, but they used a great tool called TouchGraph to create some really nice relational graphics.

How do you handle text analysis differently?  Keyword frequency is very useful for identifying repeated keywords—a simple word count provides this data point.  Quickly creating a bar graph of the number of mentions of various words can show the relative importance or permeation of various words and ideas.  You can use Tagxedo to create good keyword clouds.  By adding word association to simple keyword frequency you can see relationships between words and concepts.  Using different colors, sizes, and boldness of visualization elements can communicate relative importance quickly.  Structural analysis, like keyword association, focuses on word order; this helps you drill down into the context of a given word.  They used IBM’s Many Eyes tool to create a really nice-looking structural chart.  Looking at social media data, Twitter and Facebook activity and followers, can tell a really compelling story about how social interaction and popularity relate to frequency of posting, where you post, etc.  They built a few visualizations in Adobe Illustrator.  Visuals tell a story; they show patterns.  They touched on infographics quickly: an infographic is a visual representation of information, but most are designed to tell a visual story of pretty complex data.  You see these in large media outlets in articles and lead stories.  It is really easy to transform data – try it out!
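The keyword-frequency idea is simple enough to try yourself in a few lines of Python with the standard library plus matplotlib; the sample text below is made up for illustration:

```python
# Simple keyword frequency count plus a quick bar graph of mentions.
from collections import Counter
import matplotlib.pyplot as plt

# Made-up sample text standing in for search results or social media posts
text = """ebooks libraries data ebooks access data data
          visualization libraries data ebooks ebooks"""

counts = Counter(text.split())            # word count = keyword frequency
words, freqs = zip(*counts.most_common(5))

plt.bar(words, freqs)                     # number of mentions per word
plt.title("Keyword frequency")
plt.ylabel("Mentions")
plt.show()
```

Real use would add stopword removal and stemming, but even this crude count surfaces the most-repeated ideas in a pile of text.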

Tips and tricks: Know your message, stay simple, and experiment with data visualizations whenever you can.