Web Support Blog

October 11, 2006

Content Effectiveness and Functionalism: Update and Misc.

Filed under: Content Effectiveness, General, Web Analytics — Chris @ 4:43 am

Since I posted my whitepaper on Content Effectiveness and Functionalism, I have been working through the next step: applying it. In doing so, I discovered that I may not have applied the right terms in some cases. The good news is that with Functionalism the terms are not the important part; the methodology is. So I have updated the paper with a slight variation to a few of the terms, but the KPIs did not change. If you go to the original post, you will get the updated version, or you can get it here.

As I said, the key to Functionalism is the methodology, not the terms. In evaluating your site, whether you are focusing on content effectiveness like me or on another aspect, consider the purpose of each page in your site. A common method is a funnel with a fallout report. When you examine your fallout report, what actions do you take? If you used Functionalism, you would have KPIs for the page where you lost your users, and therefore you would have some idea of where the page was failing.
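The fallout report mentioned above can be sketched in a few lines of Python. The step names and visit counts here are invented for illustration, not taken from any real site:

```python
# Hypothetical funnel: each step is (page name, visits reaching that step).
funnel = [
    ("Search results", 10_000),
    ("Content page", 6_500),
    ("Solution confirmed", 4_200),
]

# Fallout between consecutive steps: the share of users lost at each transition.
for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    fallout = (count - next_count) / count * 100
    print(f"{step} -> {next_step}: {fallout:.1f}% fallout")
```

With per-page KPIs in place, a high fallout percentage points you at the specific page (and KPI) that is failing.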


On a completely different subject, this weekend I posted on my other blog two keyboard shortcut guides. There is one guide for Windows XP Keyboard Shortcuts and the other is for Firefox Keyboard Shortcuts. The guides can be printed double-sided and folded in three (tri-fold), and they make a handy desk reference until you learn the shortcuts. Follow the links above to get your own copy or go to my blog Skimming the Cream Off the Top.


October 5, 2006

Assumption: The Content Exit Page

Filed under: Content Effectiveness, Web Analytics — Chris @ 4:55 pm

I made some assumptions in my whitepaper on Content Effectiveness and Functionalism, so I want to discuss them over the next few weeks. The first assumption is that most users of your site will exit after reading (or viewing) the information they need, or when they give up. This assumption means that the best way to validate whether your content was effective is to look at what the customer did after reading it. This is a model I have followed for years, and it was recently supported by Elliott Masie.

As Mr. Masie pointed out in his discussion of Fingertip Knowledge (see my post: Fingertip Knowledge: Learning-on-Demand), people are increasingly learning as they need it. They are not reading ahead and learning what they may need later; rather, they are searching for an answer once they run into an issue. Likewise, once they find the answer, they go back to their original task. Take, for example, a user who sends a document to a laser printer, and it jams. The user was not trying to learn all about the printer, and jams specifically; the user was working on a document and wanted a printed version. Therefore the user will go to a support site, learn how to correct the jam, and then return to their real work.

You can apply this same concept to other support situations too. Take the example of a user who is writing code and runs into a problem, perhaps a syntax issue. They go to Google, enter related terms, and filter through the results. Once the user finds the answer, he or she does not continue this process; they return to their original task of writing code. So, from the perspective of the support site, the last transaction with the user is reading a piece of content.

If you are still with me, it is then easy to apply Functionalism to measure how effective an existing piece of content is (our Explainer/Converter page). Divide the number of exits by the number of times the content has been viewed, then multiply by 100 to get a percentage. The higher the percentage, the more effective the content. For example, if you have 1000 views and 10 exits, then the rating is 1%; likewise, 1000 views with 100 exits is 10%. This is an easy way to identify valuable content. Remember to also use the Exit Propensity concept: look across all your content and identify your worst offenders. I would encourage you to consider some weighting too, i.e., applying the formula to your most frequently viewed content instead of all your content.
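As a quick sketch, the rating can be computed and used to rank content. The content names and numbers below are hypothetical, chosen to match the worked examples in the text:

```python
def effectiveness(exits: int, views: int) -> float:
    """Percentage of views that ended the visit on this content item."""
    return exits / views * 100

# Hypothetical content items: (exits, views).
content_stats = {
    "fix-paper-jam": (100, 1000),   # 10% rating
    "install-driver": (10, 1000),   # 1% rating
}

# Sort ascending so the least effective items (worst offenders) surface first.
ranked = sorted(content_stats.items(), key=lambda kv: effectiveness(*kv[1]))
for name, (exits, views) in ranked:
    print(f"{name}: {effectiveness(exits, views):.1f}%")
```

Restricting `content_stats` to your most frequently viewed items gives you the weighted variant described above.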

Back to our formula (Exits / Page Views * 100): if the percentage is low, then you need to look at other measures. First, consider the causes. If users are viewing the content, the title and description were likely compelling enough for the user to click through. So I would consider these possible causes:

  1. Mismatch between title and description and the actual content
  2. Content is outdated
  3. Content is incomplete
  4. Content is not written at the users’ level, or is too complex

All of these symptoms are difficult to diagnose without having an expert evaluate the content, which can be expensive. Therefore, if you have a lot of such content, it may pay off to look at your creation process. (We will save this discussion for another time.) With that being said, there are some clues to look for. If users refine their searches and look at other content on your site after viewing this content, it may be a sign of a mismatch or incomplete content. If users have a tendency to spend a long time with the content, then it is likely at the wrong level or too complex. Through a process of elimination, you can conclude that the remaining content likely fits the outdated category.
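The elimination logic above can be sketched as a simple rule-based classifier. The signal names and the time threshold here are assumptions for illustration; you would tune them against your own analytics data:

```python
def diagnose(refined_search: bool, viewed_more_content: bool,
             avg_time_on_page: float, expected_time: float) -> str:
    """Rough rule-of-thumb diagnosis for low-rated content.

    Thresholds are hypothetical; adjust to your own site's baselines.
    """
    # Users kept searching or browsing afterward: content missed the mark.
    if refined_search or viewed_more_content:
        return "mismatch or incomplete"
    # Users linger far longer than expected: wrong level or too complex.
    if avg_time_on_page > 2 * expected_time:
        return "wrong level or too complex"
    # By elimination, the remaining low-rated content is likely outdated.
    return "likely outdated"
```

An expert review is still the gold standard; this kind of triage only tells you where to send the expert first.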

Enough for today… I think this gives you a lot to consider about how it could fit into your organization. Again, I will continue to address the assumptions in the whitepaper over the next few weeks. Later, I will dig deeper into the analysis and diagnostic issues, so we can make corrections based on what the data is telling us.

October 1, 2006

Content Effectiveness and Functionalism

Filed under: Content Effectiveness, Web Analytics — Chris @ 9:07 pm

My entire career has been in customer support, with the last 11 years specifically in the web support space. Naturally, everyone asks me, “How do you identify a web site fix as the equivalent of a phone fix?” When I tell them, “Unless the user tells you that your site fixed their problem, you really cannot tell,” the next question is, “Then how do you know the site is working?” Well, it takes some assumptions: (1) if your content is right (timely, accurate, and relevant), then your users are solving their problems; and (2) customers will leave when they have solved their problem or have gotten tired of looking for the solution. If you expand on assumption (2), by watching where customers exit your site, the number of searches made, and the content pages viewed, you can get a very good picture of what is working and what is not.

With these assumptions in mind, I set out to identify content effectiveness on my site, as that is what drives customer success. In this quest, I looked for others who are already doing this, who have a different view, or who have a solution for identifying self-support success. It seems the web analytics vendors have yet to work in this space, and the call center organizations are still stuck on trying to find the phone-fix equivalent. In other words, I have not been successful in my quest.

Recently though I found a whitepaper, Functionalism: A New Approach to Web Analytics on Gary Angel’s blog, SEMAngel. Gary is the President of SEMphonic, a company that has over 10 years of experience in the analytics space. To make a long story short, I was able to take the methodology from the whitepaper and apply it to content effectiveness. The whitepaper I wrote, Content Effectiveness and Functionalism, discusses how you can use Functionalism to identify the health of your content, and where to focus your efforts to improve your content (its effectiveness).

Below is a short list of basic assumptions I made in writing the paper.

1. Your support set is segmented by product. (In addition, it can also be segmented by language and region/localized.)
2. Your primary support site goal is to provide self-support.
3. The majority of your users take the Fingertip Knowledge approach. (Most users use your site to solve an immediate issue or problem, not for proactive learning.)

In future posts I will discuss the methodology in further detail, answer questions, and drill down to the next level of the methodology.

September 22, 2006

Do You Have a Collaborative Business Environment?

Filed under: Knowledge Management — Chris @ 7:57 pm

I finally got around to reading the September 2006 issue of KM World. Jonathan B. Spira wrote an article, Step up to the knowledge economy, where he made a very valuable point. He lists three tenets that are required to create a collaborative business environment (CBE). These three items ring very true in my own (support) environment.

  1. the one environment rule (OER), which describes the benefits of conflating all applications into a single interface;
  2. friction-free knowledge sharing, which eliminates unnecessary steps in order to increase knowledge worker productivity; and
  3. embedded community, which deeply integrates many of the tools within the work environment.

In terms of knowledge development, we currently are not in one environment; for that matter, we had three separate processes depending on the medium until just recently. Within a year, we should be on one environment. The greatest benefit is that we will reduce the learning curve for our knowledge contributors. For knowledge retrieval, it was just this month that we gained one search interface across all of our knowledge.

With the improvements to the environment, we are also gaining process benefits by eliminating unnecessary steps. By the end of the year, the publishing aspects that are still manual will have one less person to filter through. And again, once we get to one system, we will see additional improvements. With that being said, as we work to make sure that we are investing our precious resources in creating the right content, I do fear that we will introduce new, unnecessary process steps. From prior experience, any process needs to be watched closely: too little, and mistakes happen; too much, and nothing happens.

The third tenet seems rather obvious, but it does not play out that way. Knowledge tools need to be in the workflow. If you have a call center, the knowledge tools need to be integrated with the transaction system. And if you are expecting those agents to add knowledge, the creation aspect needs to be integrated into the workflow too. The same can be said for your business intelligence tools. If the tools are not consolidated, easy to access, and within a familiar environment, they unfortunately will not be used.

I believe the underlying message here is: if you want great knowledge collaboration to occur in your environment, you need to put the pieces in place to make it easy.

Fingertip Knowledge: Learning-on-Demand

Filed under: Learning, Search, Web Analytics — Chris @ 5:48 am

I recently listened to a podcast from Elliott Masie, Fingertip Knowledge: Learning in a “Flatter” World. Mr. Masie introduces a concept called fingertip knowledge. He uses it in the context of learning, but it is a concept I recognize; I have called it learning-on-demand. It is the idea that I won’t try to learn everything I may need to know; instead, I will use Google to find what I need, when I need it.

This also translates to our support web sites. As I have indicated in my prior blogs, users will come to your site to learn a specific piece of information to solve a problem. Your users are employing the concept of fingertip knowledge more and more every day.

This idea is why I suggest that the best way to determine success on your web support site is to look for the users who left your site immediately after reading a content (knowledge) item. Once you have fulfilled their learning-on-demand need, they will return to their work. Remember, as a provider of support information, customers come to your site because your product or service failed; once the problem is solved, they will go back to using your product to complete their real work.

September 21, 2006

Measuring Web Support Success, part 2

Filed under: Web Analytics — Chris @ 7:59 pm

As I mentioned before, you need to think about the delivery of your content as much as the content itself. Moving on from that rather basic notion, consider “What is the purpose of your customers’ visit?” and “Why are they coming to your site?” The likely answer is that they had a problem and they need your help to solve it.

Now, some customers will know (or believe they know) the answer; they just need the latest driver or patch, so their behavior may look a little different, but the intention is still the same: solve their problem. Excluding customers who come to your support site to learn (whose numbers will vary by product and industry), the ideal support experience is straightforward, and its success is therefore easy to measure.

If I were selling a product or service, once I collected the customer’s credit card number, I would count that as a success. In the support world, once your customer reads the answer to his or her problem, you have been successful: a conversion. If you buy into that idea, then this should be fairly straightforward: the last content item your customer read on your site must have been the solution to his or her problem. Why would they keep looking for an answer they had already found (learning being the exception)?

So, assuming you have a low rate of customers who visit your site purely to browse and learn, you can conclude that the last item viewed was the conversion content item. Okay, but what if the customer never found a solution, yet looked at many content items? Again, what was the last item viewed on your site? A content item? A search results page? Or a page routing to additional content? If the last page viewed on your site was a content item, then I would say your customer very likely succeeded in solving their problem. If the customer left your site from any page but a content page, then I would say they did not solve their problem.
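The last-page test above reduces to a one-line check per session. Here is a minimal sketch; the page-type labels are hypothetical stand-ins for whatever your analytics tool records:

```python
# Hypothetical page-type labels as an analytics tool might record them.
CONTENT = "content"   # knowledge/content item
SEARCH = "search"     # search results page
ROUTE = "route"       # page routing to additional content

def visit_succeeded(pages: list[str]) -> bool:
    """True when the session ended on a content page (a likely conversion)."""
    return bool(pages) and pages[-1] == CONTENT

# A visit ending on a content item counts as a likely success;
# a visit ending on a search or routing page counts as a likely failure.
solved = visit_succeeded([SEARCH, CONTENT])
gave_up = visit_succeeded([SEARCH, CONTENT, SEARCH])
```

Aggregating this flag across sessions gives you a site-level success rate under the browse-to-learn assumption stated above.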

What do you think? I will continue to expand on this subject in future blogs…

September 18, 2006

Measuring Web Support Success

Filed under: Web Analytics — Chris @ 4:13 am

How does your company know if its web support is successful? Do more visitors mean it is better? Can you correlate increased web success with a reduction of support calls to your call center? Most companies do not know if their web support is successful, nor can they correlate it to a reduction in support calls.

I have recently started a new assignment where I will define our organization’s content effectiveness strategy. The basic idea is that better content = more successful customers. When I am done, we should be able to identify our content’s health and which parts of the content need the most attention.

There are two points worth expanding on from the prior paragraph: content health, and which parts of the content need the most attention. First, content health should put information in the hands of the managers who need to decide how best to allocate precious resources. Today, most organizations only know how to answer the phone; it is the only area where they can measure value. These managers need more information so they understand when it is better to invest in content and when it is okay not to.

Hopefully, the second part is obvious. The fact is, there is more to support content than just writing a few articles or FAQs. Just remember this for now: no matter how good the content is, if users cannot find it, then it is useless.


Blog at WordPress.com.