One of the common refrains in discussions of the direction of the library's website is, "Send it to the Usability Committee!" Don't know what to call a subsection of the site? Send it to the Usability Committee! Not sure where a new piece of content should fit into the site structure? Send it to the Usability Committee! Is it worth keeping the "Services for Undergraduates" page? Ask the Usability Committee! But it's important to realize that usability, while an important part of the design process and an important tool for making informed decisions, is not the only way that many -- or even ANY -- decisions can or should be made.
I'm not suggesting that these are NOT things that involve usability (which I'm using as a broader term meaning "web assessment," rather than just usability testing). Certainly card sorting can help us decide what to name a section, and log analysis can help us decide whether or not a page is worth keeping. And standard usability testing results can inform either decision. But NO assessment technique can replace common sense and an informed viewpoint when making these decisions. In each case, usability can help; it cannot be the beginning and the end of the decision-making process.
Where does usability fit into the process?
Sometimes usability is a starting point -- a place where ideas begin. In testing the Media section of the website, it quickly became clear that the library needs a top level page for "Computing" that brings together all of our disparate content on computers and computing.
More often though, usability is a compass, telling us if we're headed in the right direction. Case in point: the tabbed search box on the home page. This little gem was very successful in helping students find what they needed in the first round of usability tests. Yet the idea for the tabbed box itself would never have come from usability testing. We may have seen a problem ("Students have trouble knowing which page to visit to search for journal articles"), but that particular solution would never have suggested itself.
Asking the right question
Usability is GREAT at evaluating an existing feature and determining its value, whether positive or negative -- answering the question: "Does this item work?" Usability is far less effective at answering more open-ended questions: "What should we put here?"
Even when those questions are slightly narrowed, usability may not be appropriate. It seems perfectly reasonable, for example, to expect usability testing to answer the question: "What do students want this item to be called?" Yet we've seen that this is often not a question worth asking, let alone worth spending resources to answer. For example, in the open card sort three years ago, at least one student grouped all microform-related content into a single category called "Micro." But if we put a "Micro" link on the home page, can you imagine anyone clicking on it? Results get marginally better when looking at similarities across the suggestions of multiple students, but even these can be misleading. This doesn't mean we shouldn't do open card sorting! In fact, I would consider it a vital part of the design process. But taking the results as gospel without further testing would be a serious mistake, and would almost certainly create more problems than it solves.
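One common way to look at similarities across multiple participants' open card sorts is a co-occurrence matrix: count how often each pair of cards lands in the same group. Here is a minimal sketch in Python; the participant groupings and card names are invented for illustration, not actual card-sort data.

```python
from itertools import combinations
from collections import defaultdict

# Hypothetical results from three participants' open card sorts:
# each participant groups cards into categories they name themselves.
sorts = [
    {"Micro": ["Microfilm", "Microfiche"], "Find stuff": ["Catalog", "Databases"]},
    {"Old media": ["Microfilm", "Microfiche"], "Search": ["Catalog", "Databases"]},
    {"Research": ["Catalog", "Databases", "Microfilm"], "Other": ["Microfiche"]},
]

# Count how often each pair of cards appears in the same group.
co_occurrence = defaultdict(int)
for sort in sorts:
    for group in sort.values():
        for a, b in combinations(sorted(group), 2):
            co_occurrence[(a, b)] += 1

# Pairs grouped together most often suggest content that belongs together,
# even when the category NAMES participants chose are unusable.
for pair, count in sorted(co_occurrence.items(), key=lambda kv: -kv[1]):
    print(pair, count)
```

Note that this tells us which items students see as related; it deliberately says nothing about what to call the resulting section, which still needs to be tested separately.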
In the above case, the question we should be asking isn't "What do students want this item to be called?" but rather, "What name can we give this section that will have meaning for students, and that will help them find it when they need to?" On the surface the two questions seem to ask the same thing, but they don't. The former assumes that we can ask students what to call something and they will tell us. The latter takes students' stated preferences out of the equation and tests the term rather than the student. It looks at what users DO rather than what they SAY. From a usability standpoint, this is vital. Nielsen pointed this out in 2001 in his article "First Rule of Usability? Don't Listen to Users." His example was that "50% of survey respondents claim they would buy more from e-commerce sites that offer 3D product views." Yet Amazon, the most successful web retailer of all time, has no 3D product views. Is it missing out on a huge selling opportunity? Or are users more inclined to SAY they like something that sounds cool, even if it would make no difference in -- or even have a detrimental effect on -- their actual behavior?
A great illustration of this came out of the library's first round of usability tests in 2008. One of the post-test questions was "What would you change about the website?" One test subject immediately responded, "Honestly, I wouldn't change a thing." Sounds great, right? A ringing endorsement! We can all stop trying to make improvements -- the website is perfect! The catch: this student had the lowest task completion rate of any of the test subjects, and the highest average times on the tasks he did complete. He told us the site was perfect, but showed us it was littered with usability problems.
Summary: So what ARE the questions?
So at what point should we be saying, "Let's send this to the Usability Committee"? When we need to answer one or more of these questions:
- Does this site/page/functionality work?
- What parts of this site/page/functionality are useful for users? In what way(s) are they useful?
- What parts of this site/page/functionality create problems for users, and what are the problems?
- Where in the hierarchy will students look for this content?
- Did the change we made improve user experience?
- Did the change we made introduce any unforeseen negatives?
- Does the terminology we're using make sense?
This is just a preliminary list off the top of my head -- no doubt others will be added as time goes on. But note that the answers to all of these questions have something in common: each can be measured in some meaningful way, whether by time spent on task, click-throughs, task completion rate, or the like. None looks for a starting point; rather, each evaluates what is already there, even if what's "there" is only an idea. And that's pretty much what usability does.
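To make the "measurable" point concrete, here is a small sketch of computing two of those metrics -- task completion rate and mean time on completed tasks -- from one session's results. The task names and timings are invented for illustration, not real test data.

```python
# Hypothetical per-task results from one usability session:
# (task name, completed?, seconds taken).
results = [
    ("Find a journal article", True, 95),
    ("Renew a book", False, 180),
    ("Locate library hours", True, 30),
    ("Reserve a study room", True, 140),
]

completed = [r for r in results if r[1]]
completion_rate = len(completed) / len(results)
mean_time_on_completed = sum(r[2] for r in completed) / len(completed)

print(f"Completion rate: {completion_rate:.0%}")                     # 75%
print(f"Mean time on completed tasks: {mean_time_on_completed:.0f}s")  # 88s
```

It was exactly this kind of behavioral measurement that contradicted the "I wouldn't change a thing" student: low completion rate and high task times told a very different story than his answer to the post-test question.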