User Research Meets Taxonomy: A Card Sorting Case Study
I once had a professor who loved Post-Its. Instead of scribbling notes in the margins of her books, she would jot them on Post-Its and stick those in the margins. Her office was filled floor to ceiling with stacks of books, each thickly leaved with little yellow quadrangles. At the end of the semester, we thanked her by giving her a box full of Post-Its in every shape, size, and color we could find. She was beside herself.
Later on, I went to library school and found out that Post-Its do terrible things to books. But Post-Its still have their uses. Take card sorting, for example. Card sorting is a research technique used by information professionals (among others; my first experience with a card sort was in an anthropology class) to explore how people group items, with the goal of developing structures that maximize users' chances of finding those items. While it's not often used to guide taxonomy development per se, it can yield insights for optimizing structured information used in site navigation.
Recently at work, we conducted a card sort on a subset of the product values used for navigation, a research activity we had not tried before. Currently, this navigation shows approximately 250 products in an alphabetically ordered, otherwise undifferentiated list. Product management has not introduced any categorization scheme for these products, so we wondered whether one could be derived from the responses of expert users.
We took what we believed to be a representative subset of the list (about 90 items) and plugged it into OptimalSort, a web-based card sorting application. (No Post-Its required!) We then recruited a set of about 30 users from the online support forums for this product line and offered them a $25 Amazon gift card if they would complete our survey. Based on our analysis of the raw data collected, we concluded that the data warranted grouping these products into seven functional categories.
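The analysis step amounted to finding items that participants consistently grouped together. OptimalSort provides its own analysis views, but the underlying idea can be sketched independently: count how often each pair of items landed in the same group across participants, then merge pairs that co-occur in most sorts. The product names, participant data, and the two-thirds agreement threshold below are all invented for illustration; this is a minimal sketch, not the tool's actual algorithm.

```python
from collections import defaultdict
from itertools import combinations


def cooccurrence(sorts):
    """Count how many participants placed each pair of items together.

    `sorts` is a list of participant results; each result is a list of
    groups, and each group is a list of item names.
    """
    counts = defaultdict(int)
    for groups in sorts:
        for group in groups:
            for a, b in combinations(sorted(group), 2):
                counts[(a, b)] += 1
    return counts


def cluster(items, counts, n_participants, threshold=0.5):
    """Merge items that co-occur in at least `threshold` of all sorts
    (single-linkage merging via a simple union-find)."""
    parent = {item: item for item in items}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for (a, b), n in counts.items():
        if n / n_participants >= threshold:
            parent[find(a)] = find(b)

    groups = defaultdict(list)
    for item in items:
        groups[find(item)].append(item)
    return sorted(sorted(g) for g in groups.values())


# Three hypothetical participants sorting five made-up products:
sorts = [
    [["Editor", "Viewer"], ["Server", "Gateway", "Proxy"]],
    [["Editor", "Viewer", "Proxy"], ["Server", "Gateway"]],
    [["Editor", "Viewer"], ["Server", "Gateway"], ["Proxy"]],
]
items = ["Editor", "Viewer", "Server", "Gateway", "Proxy"]
clusters = cluster(items, cooccurrence(sorts), len(sorts), threshold=2 / 3)
print(clusters)  # [['Editor', 'Viewer'], ['Gateway', 'Server'], ['Proxy']]
```

With a two-thirds threshold, only pairs grouped together by at least two of the three participants survive, so "Proxy" (grouped inconsistently) ends up on its own. Lowering the threshold produces fewer, broader categories; in practice we eyeballed several cuts before settling on our seven.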
Here’s part of a presentation we shared with SLA’s New England chapter on this topic:
Some thoughts on what we learned from this exercise:
It takes time. It took us a week or two to prepare the list and set up our OptimalSort instance, a month of persistent effort to recruit a significant number of participants, and another month or so to take the firehose of data from OptimalSort and crunch it into something actionable and easy to understand. There are people and agencies who can do recruiting for you, but likely not for as specific a user base as we wanted to target. It's fair to say, though, that we might have saved ourselves some time and some noise by surveying a smaller number of users.
Ninety items is a lot. The greater the number of items to be sorted, the more time it takes for users to do the sort, and the higher the likelihood that they will not complete the exercise. We actually had to change our method mid-stream by configuring OptimalSort to require users to sort all items before finishing. Even then, we had users engaging in categorization that was not exactly mindful in order to qualify for the almighty gift card. (Like the person who grouped everything into two categories, “Known Objects” and “Unknown Objects”. Heh.) Moreover, the more items there are to be sorted, the more effort is required to identify patterns in the sort data.
Even a modest effort can yield meaty results. This survey was meant to be exploratory, testing a new application and research method to see what value they had to offer. We applied them in the service of a usability problem which we knew was bothersome but, given that we are not product specialists, we had little direct ability to solve. However, our conclusions were of great interest to product management, who it turned out had been searching for a viable categorization scheme for a long while. We’re now working with them on implementing the results to improve product-based navigation across multiple web properties.