Detzi (2012) wrote that expectations about content audits “tend to feed anxieties and overwhelm, and are often why the prospect of completing content audits are too quickly dismissed.” I certainly found the most difficult part of the content audit was the prospect of doing it. Once begun, it seemed like a task with a fairly clear progression.
I partially audited the website of the Johnson Graduate School of Management library. I used to work at this library fresh out of college, *mumblemumble* years ago. The library has changed a lot. Almost all of the physical collection was merged into one business-oriented main library serving the business school, the hotel management school, and the industrial and labor relations school. The Johnson library itself mostly houses course reserves and computers with access to the crucial business databases, with one librarian (there were three plus paraprofessionals when I worked there).
(should link to the Google Sheet, please let me know if this does not work)
The most time-consuming task was deciding on the order in which to list pages and how to apply Page IDs. Since it helps to have a sense of the architecture to do this, it can be a little confusing when there are multiple ways of accessing a page. Deciding how to map them out was sometimes challenging, but also interesting because it was challenging. It might have been more challenging, to the point of being frustrating, if I had really needed to inventory the whole site and figure out where everything ought to go, but the limited nature of this exercise spared me from “tilting at the windmill of comprehensiveness” (Rosenfeld, 2006). Once I had inventoried the site and decided how to arrange the spreadsheet and Page IDs, filling out the spreadsheet was pretty straightforward. It wasn’t drudgery, although maybe after 100 pages it would have become so! In a kind of OCD way I liked filling out my spreadsheet.
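The hierarchical Page ID numbering I used can be sketched in code. Below is a minimal illustration, assuming a simple nested site map with hypothetical page names (not the actual Johnson Library structure): each top-level page gets an ID like 1, 2, 3, and children are numbered 1.1, 1.2, 1.2.1, and so on, in the order they would appear in the inventory spreadsheet.

```python
# A minimal sketch of hierarchical Page ID assignment for a content
# inventory spreadsheet. The page names below are hypothetical examples,
# not the actual Johnson Library site map.

def assign_page_ids(tree, prefix=""):
    """Walk a nested {title: children} dict, yielding (page_id, title)
    rows in the order they would appear in the spreadsheet."""
    for i, (title, children) in enumerate(tree.items(), start=1):
        page_id = f"{prefix}.{i}" if prefix else str(i)
        yield page_id, title
        # Recurse so children are listed directly under their parent.
        yield from assign_page_ids(children, page_id)

site = {
    "Home": {
        "Databases": {},
        "Course Reserves": {"Request an Item": {}},
        "About": {"Staff": {}, "Hours": {}},
    }
}

rows = list(assign_page_ids(site))
for page_id, title in rows:
    print(page_id, title)
# "Home" is 1, "Databases" 1.1, "Request an Item" 1.2.1, etc.
```

The ordering question I found tricky shows up here, too: a page reachable from two parents can only sit in one place in the hierarchy, so the ID scheme forces a choice about where it “belongs.”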
One source of confusion when I was creating the list was that the website is part of a larger system and must connect to the broader university library system and to other university websites (such as IT or email services). I tried to exclude pages that linked outside the management library site, but included a few external pages that are probably highly important to Johnson members. Some links that appear to be clearly business school material connect the user (without warning) to the central library site, changing the look and navigation significantly. In a real audit, website analytics would be important to determine whether Johnson members use these resources enough that they should be moved to the Johnson Library site, or at least integrated more smoothly.
I was confused by the choice to place the most popular resources on the home page behind tabs, without giving them distinctive URLs (that is, you click a tab to access “Databases” or “Books and Journals” but the URL does not change). From a content audit perspective, this is a bit of a pain. Users probably will not care, however, and it makes sense from their perspective–when I worked there, business information databases and course reserves were far and away the most used resources, and if the site is anything to go by, that is still true now. I noticed that the best pages on the Johnson site (best written and most useful) were the databases and course reserves pages. The worst pages were, for the most part, ones that probably see much less traffic.
The audit was interesting–my overall impression was that the site was well designed based on UX principles. As I audited the site, however, I found too many long paragraphs, some pages that were poorly written but quite useful, and one page that was really well written but contained pretty much no information whatsoever (the “Workshops” page, which essentially just said: yes, we have workshops, email us to find out more).
I discovered a lot more about the site than I would have just by looking at the pages in a less systematic way. Content audits seem like a valuable way to uncover flaws–particularly flaws that users themselves cannot help you identify. That is, if a database connection just doesn’t work, users will probably complain. That is a straightforward problem and you will probably hear about it. But if inconsistent navigation or text misdirects users, they are likely to feel vaguely confused but less likely to be able to say why they feel that way or point to the problem. This was a really valuable experience!
Detzi, C. (2012, March 20). From content audit to design insight: How a content audit facilitates decision-making and influences design strategy [Web log post]. UX Magazine. Retrieved from http://uxmag.com/articles/from-content-audit-to-design-insight
Rosenfeld, L. (2006, June 16). The rolling content inventory [Web log post]. LouisRosenfeld.com. Retrieved from http://www.louisrosenfeld.com/home/bloug_archive/000448.html