EMC has a very complex set of relationships with our partners. They have many different roles (reseller, distributor, service provider, etc.) and work with dozens of different products. Partners are segmented into programs in order to drive engagement; program membership (and with it, information) is gated based upon achievements such as successfully completing training as well as achieving revenue targets. More pointedly, some companies who are our partners in one space are competitors in another, meaning that content access needs to be carefully meted out, often at the level of individual users. These ambivalent relationships with individual partners are subject to the vicissitudes of the industry, and programs and campaigns targeting all our partners are characteristically updated on an annual basis.
Historically, we relied on an internally developed, technically robust solution to manage user access privileges. Unfortunately, it suffered from a lack of governance. Dozens of program managers, acting independently of one another, could command updates to entitlement attributes and their usage. Once created, attributes could not be deleted, meaning that over time hundreds of outdated values accumulated in the system. Nor could they be edited, so attributes created to support user access management for a defunct program were repurposed to support new ones, still bearing the defunct program names. Further, attributes were applied to content in clusters denoted only by four-digit numeric IDs. This meant that it was impossible to look at a user profile and a content metadata record and understand whether or not that user should have access to the asset. Predictably, all this resulted in the creation of a host of offline keys, usually managed as Excel documents on individual user desktops, fluctuating as programs and training modules were initiated or abolished. Inordinately complex and woefully inefficient.
Our goal in migrating partner content to a new platform was to avoid migrating the legacy user access management model with it, or recapitulating it in a new environment. In particular, we wanted to ensure parity and transparency between the application of entitlement attributes to user profiles and content metadata records. To achieve this, we needed to think of user profile records as a kind of content asset, one which could have metadata attributes applied by an external master data system in much the same way as multiple content repositories consume product tagging from our MDM platform.
We spent several months reviewing the legacy model, both through analysis of entitlement attribution and conversations with partner program managers. From this review, we were able to define a user access management model comprising 45 individual attributes, representing a roughly 75 percent reduction in the total number of available attributes. These attributes were ordered into three taxonomy hierarchies in our MDM system; a careful structuring of the hierarchies as well as an inheritance property ensures that only a small number of attributes need to be applied to content to provide a broad level of user access. Integrations were set up with the MDM API to both the user profile database and the new platform’s content management system. Now, when a user attempts to access an asset on the new portal, a matching logic compares the entitlement data passed to the site by the user profile web services against the entitlement attributes on the asset’s metadata record. If there is a match, the content is displayed; if not, the content remains hidden.
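To make the matching and inheritance behavior concrete, here is a minimal sketch. The attribute names, the hierarchy, and the function names are all invented for illustration; the actual model and its MDM implementation are of course different.

```python
# Hypothetical sketch of hierarchy-aware entitlement matching.
# Attribute names and the parent/child structure are illustrative only.

# A taxonomy hierarchy: each attribute maps to its parent (None at the root).
HIERARCHY = {
    "partner": None,
    "partner/reseller": "partner",
    "partner/reseller/gold": "partner/reseller",
    "partner/distributor": "partner",
}

def ancestors(attr, hierarchy):
    """Yield attr and every parent up to the root (the inheritance walk)."""
    while attr is not None:
        yield attr
        attr = hierarchy.get(attr)

def user_can_access(user_attrs, content_attrs, hierarchy=HIERARCHY):
    """True if any user attribute matches a content attribute, directly
    or via an ancestor. Tagging content with one broad attribute (e.g.
    'partner/reseller') thereby grants access to users holding any of
    its descendants, which is why only a few tags per asset suffice."""
    expanded = set()
    for attr in user_attrs:
        expanded.update(ancestors(attr, hierarchy))
    return bool(expanded & set(content_attrs))

# A gold-tier reseller can see content tagged for all resellers...
print(user_can_access({"partner/reseller/gold"}, {"partner/reseller"}))
# ...but a distributor cannot.
print(user_can_access({"partner/distributor"}, {"partner/reseller"}))
```

The key design point the sketch illustrates is that the matching logic only ever consults the shared hierarchy, so the same comparison works identically whether the attributes arrive from the user profile web services or the content metadata record.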
By treating user records as objects equivalent to downloadable files or web pages, we’ve been able to overcome the systemic divide between user profile management and content management, a divide which had produced independent models for each. Now when help desk personnel and content publishers talk to one another about user access issues, they’re speaking the same language. Not needing to translate between multiple models means that users get the assistance they need more efficiently. And of course, with centralized management of the user access model in the MDM repository, changes to the model can be propagated synchronously and almost instantaneously across all impacted systems. We think that’s pretty neat, and our employer thought it was so cool, they even applied to patent it.
In an earlier post I shared my presentation on taxonomy change management, which alludes to software utilities which can be used to track taxonomy change requests. Since I didn’t specify what those were, and since not all info pros are familiar with applications of this type, I thought I’d dig a little deeper into the topic.
Pity poor Excel. The true workhorse of the metadata management world, it nonetheless commands only a grudging respect from its user base. The idea of managing requests manually in an offline document justifiably makes people cringe. But if you’re working with no budget, a small team and a relatively small request volume, it can be used effectively for request tracking. At a past consulting gig of mine we used Excel not to track the specifics of each request, but as a tally showing how many terms were being generated in the tagging vocabulary we managed and which consumers were the source of those requests. Our client loved that Excel sheet, because she could easily see where the growth areas were and demand budget from top consumers to support our efforts.
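The tally described above is simple enough to sketch in a few lines. The consumer names and terms here are invented, and a spreadsheet does this job just as well; the point is only what gets counted.

```python
from collections import Counter

# Hypothetical request log: (requesting consumer, new term added).
# In practice this was a column in a shared Excel sheet.
requests = [
    ("Support Portal", "vSphere plugin"),
    ("Support Portal", "backup appliance"),
    ("Marketing Site", "cloud tiering"),
    ("Support Portal", "data domain"),
]

# Tally of new vocabulary terms by the consumer that requested them --
# exactly the view that shows where the growth areas are.
terms_per_consumer = Counter(consumer for consumer, _ in requests)
for consumer, count in terms_per_consumer.most_common():
    print(f"{consumer}: {count} new terms")
```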
And of course, you can toss the spreadsheet onto a file share, which gets around the problem of multiple versions of the document being marooned on people’s desktops. Or you can go one step further.
SharePoint
Obviously, SharePoint is not the only solution of its kind, and there’s no point in deploying it or a comparable platform just to support request management. But if you’re already working in an environment where a SharePoint site is at your disposal, it’s worth taking advantage of the features it offers to support the request management process.
SharePoint, as has been widely observed, is structured as a series of lists. One such list is a project task list, which allows you to create what amounts to a radically simplified project plan. Regarding the change management process as a project of indefinite duration, you can successfully leverage a project task list to specify requests, assign priorities, identify resources, and track progress.
Bear in mind that all of this tracking effort is still manual, so it requires the sustained engagement of everyone on the team to make it useful. However, it allows for dynamic updating of tasks by multiple distributed users, and avoids doing so within the context of a document, which can be cumbersome.
Issue tracking software
Working in high tech, it’s only natural to repurpose solutions from the software development world to support taxonomy management. In the past several years, I’ve worked with several different issue tracking platforms in a number of capacities, both commercially available (IBM Rational ClearQuest, HP Quality Center, Siebel — now Oracle — CRM) and internally developed (Microsoft Product Studio, available externally as Visual Studio Team System).
In addition to sharing the advantages of the project task list, these applications allow for considerably more detail regarding issues. Keep in mind however, that the level of detail — or the names of the data entry fields — may not have direct application to issues of the type faced by taxonomy managers (e.g., enumerating steps to reproduce an issue, or assigning an issue to a scheduled software build). Issue trackers also offer greater flexibility in assigning issues for work, including allowing any user to pick up and self-assign an open issue.
Customer relationship management (CRM) applications allow the additional advantage of permitting end users to open issues themselves (usually via email to a given alias) and for managers to correspond directly with the affected users in the context of the application.
Many of these are enterprise-level tools, and require a significant investment in licensing and configuration. Similar to SharePoint, they may be worth leveraging if your employer already has them under license, but are otherwise likely out of reach. Two notable exceptions are JIRA and Intervals, each of which is offered as a hosted, cloud-based solution for a very reasonable licensing fee.
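As one example of why these tools adapt well to taxonomy work: JIRA exposes a REST API for creating issues, so change requests can even be filed programmatically. The sketch below just builds the request payload; the project key, summary, and field values are invented for illustration, and a real instance would also need a URL and authentication.

```python
import json

# Hypothetical sketch: composing a taxonomy change request for JIRA's
# issue-creation REST endpoint. "TAX" is an assumed project key.
payload = {
    "fields": {
        "project": {"key": "TAX"},
        "summary": "Add term 'hybrid cloud' to product vocabulary",
        "description": "Requested by the Support Portal team.",
        "issuetype": {"name": "Task"},
    }
}

body = json.dumps(payload)
# This JSON body would be POSTed to the instance's issue-creation
# endpoint with a JSON content type and appropriate credentials.
print(body)
```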
This blog is principally about metadata management. But I work in high tech, and as someone who does I’m occasionally allowed to nerd out about developments in the industry. Particularly when they relate to Microsoft, where I worked as a contractor doing taxonomy management and user experience design for a couple of years. And even more particularly when someone makes assertions about Microsoft as bold as this one by Michael Mace or this one by Wolfgang Gruener.
Before I begin, I should note that my work at Microsoft turned me into something of a fanboy. I bragged on Facebook about my trip to the brick-and-mortar Microsoft Store in Santa Clara, not that any of my friends cared. I own both a Zune HD and a Windows Phone, and I will probably buy a Surface in short order when it debuts in a few weeks. All of these devices share the Metro user interface; I like using it, and I find my friends and coworkers like it, too, when they ogle my phone, though they swear they would never consider buying one. I’m eager to remind them that, with the upcoming release of Windows 8, they’ll eventually be using Metro whether they mean to or not.
These commentators seem to think that Windows 8 will be a wedge which drives desktop users to other platforms, precisely because its user interface was not designed for desktop computing. A big part of this argument rests on the notion that the shift to Metro is as disjunctive as the shift from MS-DOS to Windows was. I don’t know if I agree — GUI to GUI does not seem as profound to me as command-line to GUI — but it’s tough to quantify.
I’d argue that Windows 95 was in many ways a big disjunct from Windows 3.1/NT (notably through the introduction of the “Start” button). Customers griped about that shift but still bought it. That’s largely because of the extent of Microsoft’s market penetration in desktop OS, which hasn’t shifted all that much since 95 rolled out. I’m not convinced it will drop because Metro leaves a bad taste in some folks’ mouths initially.
Indeed, I think if anything the hump is likely more easily gotten over now than it was fifteen to twenty years ago. Microsoft’s user base is growing more and more inured to these kinds of disruptive interface changes, thanks to the growth of the web and web-based applications, and is adapting to them more quickly. Everyone whines about Facebook every time they roll out a UI change, but Facebook still has 850 million users. Tech bloggers didn’t use Timeline as an occasion to speculate on Facebook’s demise. And Microsoft, like Facebook, is focused on pushing quality product with the intention of leading users, not coddling them.
All that said: I agree that Metro may not readily translate to a desktop environment, even having enjoyed using it on my Windows Phone for over a year. Clearly part of the play with Windows 8 — perhaps the biggest part — is anticipating ongoing diminution in the PC market, but PCs are inevitably going to be with us for some time yet.
If Metro doesn’t play nice with that category of devices, or the legacy business applications run on them, or their older and less flexibly disposed user base, then Windows 7 may be with us in the desktop world for a long time to come. But then again: Windows 7 has still not supplanted XP in many markets (including high-tech customers like my employer). That’s not something that Microsoft’s thrilled about, of course, but nonetheless those folks aren’t migrating to other platforms. They still use Internet Explorer. They still use Office.
Bottom line: Windows 8 with Metro is still a super gutsy and super risky move for Microsoft. If it doesn’t pan out it will hurt them, though I’m not sure how much, or who will benefit from their pain. And lastly, I will snarkily observe that when Microsoft’s competitors take comparable risks they are lauded as visionary, while Microsoft is taken to task for being incautious. Bias much?
I once had a professor who loved Post-Its. Instead of scribbling notes in the margins of her books, she would jot them on Post-Its and stick those in the margins. Her office was filled floor to ceiling with stacks of books, each thickly leaved with little yellow quadrangles. At the end of the semester, we thanked her by giving her a box full of Post-Its in every shape, size, and color we could find. She was beside herself.
Later on, I went to library school and found out that Post-Its do terrible things to books. But Post-Its still have their uses. Take card sorting, for example. Card sorting is a research technique used by information professionals (among others — my first experience with a card sort was in an anthropology class) to explore how people group items in order to develop structures which maximize the probability of users being able to find those items. While it’s not often used to guide taxonomy development per se, it can afford insights into optimizing structured information being used for site navigation.
Recently at work, we conducted a card sort on a subset of product values used for navigation, a research activity we’d not conducted previously. Currently, this navigation shows approximately 250 products in an alphabetically ordered, otherwise undifferentiated list. Product management has not introduced any categorization scheme for these products, so we wondered if one could be derived from the responses of expert users.
We took what we believed to be a representative subset of the list (about 90 items) and plugged it into OptimalSort, a web-based card sorting application. (No Post-Its required!) We then recruited a set of about 30 users from the online support forums for this product line and offered them a $25 Amazon gift card if they would complete our survey. Based on our analysis of the raw data collected, we determined that there was warrant for grouping these products into seven functional categories.
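One common way to crunch card-sort data of this kind is a pairwise co-occurrence tally: count how often each pair of items lands in the same group, and the pairs most participants agree on suggest candidate categories. This is a generic sketch of that analysis, not the actual processing we did; the product names and sort results are invented, and OptimalSort’s own exports and analysis views do much of this for you.

```python
from collections import Counter
from itertools import combinations

# Hypothetical card-sort results: each participant's sort is a list of
# groups, each group a set of product names (names invented here).
sorts = [
    [{"Backup A", "Backup B"}, {"Monitor X", "Monitor Y"}],
    [{"Backup A", "Backup B", "Monitor X"}, {"Monitor Y"}],
    [{"Backup A"}, {"Backup B", "Monitor X", "Monitor Y"}],
]

# Count how often each pair of items was placed in the same group.
co_occurrence = Counter()
for sort in sorts:
    for group in sort:
        for pair in combinations(sorted(group), 2):
            co_occurrence[pair] += 1

# Pairs grouped together by the most participants are the strongest
# candidates for belonging to a shared category.
for pair, count in co_occurrence.most_common():
    print(pair, count)
```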
Here’s part of a presentation we shared with SLA’s New England chapter on this topic:
Some thoughts on what we learned from this exercise:
It takes time. It took us a week or two to prepare the list and set up our OptimalSort instance, a month of persistent effort to recruit a significant number of participants, and another month or so to take the firehose of data from OptimalSort and crunch it into something actionable and easy to understand. There are people and agencies who are able to do recruiting for you, but likely not for as specific a user base as we wanted to target. Though it’s fair to say we might have saved ourselves some time and some noise by surveying a smaller number of users.
Ninety items is a lot. The greater the number of items to be sorted, the more time it takes for users to do the sort, and the higher the likelihood that they will not complete the exercise. We actually had to change our method mid-stream by configuring OptimalSort to require users to sort all items before finishing. Even then, we had users engaging in categorization that was not exactly mindful in order to qualify for the almighty gift card. (Like the person who grouped everything into two categories, “Known Objects” and “Unknown Objects”. Heh.) Moreover, the more items there are to be sorted, the more effort is required to identify patterns in the sort data.
Even a modest effort can yield meaty results. This survey was meant to be exploratory, testing a new application and research method to see what value they had to offer. We applied them in the service of a usability problem which we knew was bothersome but, given that we are not product specialists, we had little direct ability to solve. However, our conclusions were of great interest to product management, who it turned out had been searching for a viable categorization scheme for a long while. We’re now working with them on implementing the results to improve product-based navigation across multiple web properties.
Last month I paid a whirlwind visit to Chicago — swam in Lake Michigan, toured the Roy Lichtenstein retrospective at the Art Institute, had a couple pints of Bell’s Two-Hearted Ale with a friend I hadn’t seen in sixteen years, and presented at a session on taxonomy change management at SLA’s 2012 Annual Meeting.
This presentation is actually one I’ve wanted to do since I was still in my taxonomy salad days. So much of professional development in the field is focused on the run-up to taxonomy initiatives — choosing platforms, tools, and vendors — as well as the theoretical underpinnings of structured information. While this is all very nice, my old team and I were perpetually frustrated by sitting through webinars which didn’t reflect or enhance our own day-to-day experience of maintaining and managing taxonomies already in flight. Our team lead challenged us to stop bitching and do our own webinar, so I finally got around to that. And to think it only took four years.
Thanks to my copresenter, Fran Alexander, whose case study of managing controlled vocabularies for the BBC Archives was mind-blowing by virtue of the extent of the work she and her team have taken on. Thanks, too, to all the taxonomy peeps at SLA who chatted up and live-tweeted our session. Despite being held at 8 am on the last day of the conference, the program got good advance buzz and we had a full house. Clearly, people care about change management!
If you missed the session, fear not — you may have another chance. Plans are in the works to present an abbreviated session to groups in both New York and Boston this fall, so stay tuned!
It’s hard to criticize global metadata standards without sounding like a jerk. They mean that we don’t have to reinvent the wheel every time we find ourselves in need of a new schema or vocabulary. They allow for portability of data and interoperability of systems. Someday, they might just transform the interwebs into a completely synchronized knowledge repository in which no translation between domains and locales is required. (Yeah, that last one seems like kind of a long shot these days.)
All great things, to be sure. But think back: can you remember the last time you used a standard? In certain areas, to be fair, odds are good that you’ve rubbed shoulders with a global standard recently (libraries, research laboratories, the public sector). But if you’re in industry, you may be managing proprietary metadata (a “standard” of a sort) in back-end legacy systems which aren’t indexed by agents outside the corporate firewall. Or you’re trying to create a novel experience for your customers, perhaps allowing their consensus to shape a “standard”.
So “standards” are unarguably useful, but they are often highly particular. There’s little incentive for a company to make information about its customer base portable if that information isn’t going to be shared. And while aligning idiosyncratic internal systems to methodical global standards may be the “correct” thing to do, it can be a woefully costly effort, expending time and money perhaps better spent on other projects, as well as expending the good will of metadata consumers who don’t take kindly to change for change’s sake.
Even the most straightforward standardization can wreak havoc with internal systems. Geography seems like it should be beyond dispute, except that geographies are subject to political disputation all the time. More urgent than that: the enterprise’s view of the world is typically oriented to the market in preference to politics. Because different industries and corporate entities exploit different markets, they are prone to each carving up the world differently. Sometimes different divisions within the same company will deploy different geographies. Recently at work I came across an example of two different geographies pertaining to support partners: one describing the partner companies, the other describing the employees of those companies.
This is messy stuff, and the fantasy of wishing upon a global standard and making it all go away is an alluring one. But counter-standards will continue to exist despite our best efforts at rationalization, because they serve a purpose. The best thing we can do is make an effort to understand the underlying intent, then make recommendations for improving the experience of managing and consuming metadata which continue to support it.
I was fortunate to attend Taxonomy Boot Camp for the first time this past November, but unfortunately missed one of the more unusual programs of the session: Ahren Lehnert’s presentation on Agile for Managing Taxonomy Projects.
For folks who aren’t familiar with agile, it’s a suite of methodologies typically used for managing software development projects. These methods share some common notions about what it takes to build effective working software which upend a number of assumptions guiding older standards for project management.
Lehnert shows how agile approaches can be of benefit to taxonomy management projects. He notes that they share a lot in common with software development work, notably a need for continuous improvement and an evolving understanding of user requirements over time. His presentation keys into a couple things I’m passionate about: agile methods and robust taxonomy change management practices. (Hey, we can’t always choose our passions.)
Having spent a few years practicing taxonomy development and management projects in agile environments (not to mention being a member of an agile development team tasked with building a taxonomy management application), I will add that it often helps to be on the same wavelength as your developers. When developers and business stakeholders haven’t defined in advance how metadata values will be consumed by applications and displayed on web pages, or even which values are supposed to be available for selection, it does a taxonomist no good to be identifying standards and defining hierarchies. Though it can sometimes be frustrating, taxonomy development needs to keep pace with the broader development path and the discoveries made along the way.
Once a project is done, agile methods can provide a foundation for curating metadata going forward. Too often taxonomies are static products delivered along with code, with no plan, resources, or tools in place for ongoing development. Agile is one way of approaching change management strategically and incrementally, following the same iterative processes used to deliver other quality user experiences.