Web Support Blog

October 30, 2006

Components of Enterprise Search

Filed under: Search — Chris @ 4:03 pm

Do you ever wonder what the parts of a search engine are? Or how you would evaluate search engines across enterprise search vendors? Well, if you visit the InQuira Resources (http://www.inquira.com/resources_reports.asp) you can order a whitepaper, “InQuira Self-Service and Support Search Solution – Advanced Linguistics, Dynamic Navigation and Classification,” that can help you. The paper is an analysis done by the Patricia Seybold Group specifically about InQuira’s search solution, but it has information that can help you no matter who the vendor is.

The Patricia Seybold Group has developed a six category evaluation matrix that ranges from retrieval and results management to architecture to company viability. The whitepaper reviews each of these categories, many of which have up to five sub-categories.

In addition, the paper describes a search effectiveness ladder. The idea is that the higher the engine is on the ladder, the better it is. Keep in mind that the assumption is that natural language search is a good thing. Most of us are very familiar with Google, which is keyword search, and its algorithms are much different from an enterprise search tool’s. So assuming you buy into natural language search and you are willing to train your users on how to use it, here is the ladder:

  1. Keyword or text search: The more the word is mentioned in the content, the more relevant the content is scored.
  2. Natural language processing (NLP): Recognizes grammar, concepts, and relationships between words.
  3. Add synonyms to keyword and text searches (e.g., smudge = smear)
  4. Advanced NLP: Introduces new relationships between words such as “contains,” “is part of,” and “occurs with.” Also identifies parts of speech and mines, classifies, and matches concepts.
  5. Intent: Classifies a question to make it actionable. The intent category is linked to language rules, ontology, and user experience categories.
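To make steps 1 and 3 of the ladder concrete, here is a minimal sketch of synonym expansion layered on keyword scoring. The synonym table and scoring function are hypothetical illustrations, not InQuira’s implementation:

```python
# Hypothetical synonym table (step 3 of the ladder).
SYNONYMS = {"smudge": ["smear"], "smear": ["smudge"]}

def expand_query(terms):
    """Return the query terms plus any known synonyms."""
    expanded = set(terms)
    for term in terms:
        expanded.update(SYNONYMS.get(term, []))
    return expanded

def keyword_score(query_terms, document_words):
    """Step 1 of the ladder: the more often the terms appear, the
    higher the score; with expansion, synonyms count too."""
    return sum(document_words.count(t) for t in expand_query(query_terms))

doc = "wipe the smear off the print head to fix the smudge".split()
print(keyword_score(["smudge"], doc))  # counts both "smudge" and "smear"
```

A real engine would add stemming, stop words, and weighting, but the principle is the same: synonym expansion lets a keyword engine match content the user’s exact words would miss.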

So whether you are looking at a natural language search engine or a keyword search engine, I think this document is a good reference to use when analyzing all the components requiring consideration when purchasing a search solution. Of course, if you already have a search solution, most of these categories still apply to make sure you have put the effort into each area.


October 15, 2006

When to Focus on Findability (Instead of Content)

Filed under: General, Search — Chris @ 12:06 pm

Recently I heard someone say that content is king – and I agree. But he went on to say that content is where you should focus the majority of your attention on your site. To take that at face value would be a mistake. If you do not have any content on your site, of course you need to establish a process for creating and publishing content – a topic all on its own. The point I want to make today is that no matter how much content you have, if your users cannot find it, you might as well have not invested the resources to create it.

Take this a step further – you can focus your resources on developing high-quality content, which can be expensive, and yet if your customers are not finding the content, you will have invested precious resources on developing content that no one is using. Consider your success rate for users’ ability to find your content – if it is 50%, then only one of every two attempts to find a content item will succeed.

Here is how I look at this. Of all your content items, how many have been viewed within the last month (or quarter)? Do not confuse this with your success rate – if you had 100 content items with 50% success, yet only 20 items were viewed, you potentially have an 80% findability (search or browse) failure. Of course, since you only missed 50% of the time, your findability problem is no more than 50% – the rest is unnecessary content.

I will break this down further. Begin with this question: how many of the failed 50% could have been solved with the 80% that were not viewed? Assuming another 30 content items could have solved problems, then you have a 60% failure (and 50% unnecessary content). Therefore only 40% of your content is being found.


Realistically, you probably do not have all the answers for all of your customers’ needs. So assume you could solve another 25% of your customer queries if your customers could find the content, and assume it took another 10 content items; then you would only have a 30% failure (10/30), with 70% of your content being waste.
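The arithmetic above can be worked through explicitly. This is a sketch using the post’s hypothetical numbers (100 items, 20 viewed, 30 unviewed items that held answers):

```python
# Hypothetical inventory from the post's example.
total_items = 100
viewed_items = 20                 # items actually viewed last month

solvable_unviewed = 30            # unviewed items that could have solved queries
useful_items = viewed_items + solvable_unviewed   # 50 items actually useful

findability_failure = solvable_unviewed / useful_items   # useful but never found
waste = (total_items - useful_items) / total_items       # content no one needs
found = viewed_items / useful_items                      # useful content found

print(f"{findability_failure:.0%} failure, {waste:.0%} unnecessary content, "
      f"{found:.0%} of useful content found")
```

Plugging in the numbers gives the 60% failure, 50% unnecessary content, and 40% found figures from the paragraph above; the value of laying it out this way is that you can substitute your own site’s counts.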


So we can see that putting the majority of our focus on content is not always the right approach. Instead, figure out how successful your site is at findability, and use that to drive your investment. With this new information, you can choose among some strategies for dealing with it. Typically, if you do a good job with findability, you can maintain it with minimal effort, so once you address these issues (search and browse), you can put most of your effort on content.

Perhaps you do not have the skills to address your findability issues. Because findability is more of a one-time effort (followed by monitoring for issues), you can conceivably outsource much of the work. This is especially true for search. There are many folks who do great work on site design and architecture, and you may already have one of them on staff, but search is a completely different issue.

In future posts, I will talk more specifically about search, and recommended approaches. For now if you look for outside help, here are some issues to explore: 1) keyword vs. natural language search (hint: Google uses keyword); 2) tagging vs. no tagging: balance resources for tagging vs. performance without tags.

October 11, 2006

Web Analytics Wednesday | Web Analytics Demystified

Filed under: Web Analytics — Chris @ 8:41 pm

I had a great opportunity tonight to meet Eric T. Peterson, the author of Web Analytics Demystified, and several other web analytics colleagues. Eric has coordinated Web Analytics Wednesday all over the globe — this is a time for people to meet locally and discuss web analytics. I am just fortunate enough to live in Portland, where Eric, Web Trends, and many Intel folks do too.

I did not meet anyone with customer support experience like myself, but I definitely met many smart and enjoyable people. I was very impressed with how welcome I felt, and how well everyone was engaged in conversation. Kudos to Eric for putting this together — I wish I had attended one of these earlier. Be sure to check out Web Analytics Wednesday in your area.

Oh, and I got a copy of Web Analytics Demystified, which I will read and give a review here in the future. I also happened to order another of Eric’s books today, Web Site Measurement Hacks — looks like I have some reading to do.

Content Effectiveness and Functionalism: Update and Misc.

Filed under: Content Effectiveness, General, Web Analytics — Chris @ 4:43 am

Since I posted my whitepaper on Content Effectiveness and Functionalism, I have been working through the next step: applying it. In doing so, I discovered that I may not have always applied the right terms in some cases. The good news is, with Functionalism, the terms are not the important part — it is the methodology. So I have updated the paper with a slight variation to a few of the terms, but the KPIs did not change. If you go to the original post, you will get the updated version, or you can get it here.

As I said, the key to Functionalism is the methodology, not the terms. In evaluating your site, whether you are focusing on content effectiveness like me, or another aspect, consider the purpose of each page in your site. A common method to use is a funnel and a fallout report. When you examine your fallout report, what actions do you take? Well, if you used Functionalism, you would have KPIs for the page where you lost your users, and therefore you would have some idea as to where the page was failing.
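For readers new to fallout reports, here is a minimal sketch of one. The step names and visitor counts are made up for illustration:

```python
# A hypothetical support-site funnel: visitor counts at each step,
# and the fallout (share of visitors lost) between adjacent steps.
funnel = [
    ("Search", 1000),
    ("Results list", 800),
    ("Content page", 500),
    ("Exit (assumed solved)", 300),
]

for (step, count), (_, next_count) in zip(funnel, funnel[1:]):
    fallout = (count - next_count) / count
    print(f"{step}: {count} visitors, {fallout:.0%} fallout to next step")
```

The point of pairing this with Functionalism is that each step in the funnel is a page with a defined purpose and KPIs, so a spike in fallout at one step tells you which page to investigate.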


On a completely different subject, this weekend I posted on my other blog two keyboard shortcut guides. There is one guide for Windows XP Keyboard Shortcuts and the other is for Firefox Keyboard Shortcuts. The guides can be printed double-sided and folded in three (tri-fold), and they make a handy desk reference until you learn the shortcuts. Follow the links above to get your own copy or go to my blog Skimming the Cream Off the Top.

October 5, 2006

Assumption: The Content Exit Page

Filed under: Content Effectiveness, Web Analytics — Chris @ 4:55 pm

I made some assumptions in my whitepaper on Content Effectiveness and Functionalism, so I want to discuss them over the next few weeks. The first assumption is that most users of your site will exit after reading (or viewing) the information they need, or when they give up. This assumption means that the best way to validate whether your content was effective or not is to look at what the customer did after reading it. This is a model I have followed for years, but it was recently supported by Elliott Masie.

As Mr. Masie pointed out in his discussion of Fingertip Knowledge (see my post: Fingertip Knowledge: Learning-on-Demand), people are increasingly learning as they need it. So, they are not reading ahead and learning what they may need later; rather, they are searching for an answer once they run into an issue. Likewise, once they find the answer, they are going back to their original task. Take, for example, the case of a user who sends a document to the laser printer and it jams. The user was not trying to learn all about the printer, and specifically jams; the user was working on a document and wanted a printed version. Therefore the user will go to a support site, try to learn how to correct the jam, and then return to their real work.

You can apply this same concept to other support situations too. Take the example of a user who is writing code and runs into a problem — perhaps a syntax issue. They go to Google, enter related terms, and filter through the results. Once the user finds the answer, he or she does not continue this process; they go back to their original task — writing code. So, from the perspective of the support site, the last transaction it sees with the user is reading a piece of content.

If you are still with me, it is easy to then apply Functionalism to measure how effective an existing piece of content is (our Explainer/Converter page). Divide the number of exits from the content by the number of times the content has been viewed, then multiply by 100 to get a percentage. The higher the percentage, the more effective the content. For example, if you have 1000 views and 10 exits then the rating is 1%; likewise, 1000 views with 100 exits is 10%. This is an easy way to identify valuable content. Remember to also use the Exit Propensity concept: look across all your content and identify your worst offenders. I would encourage you to consider some weighting too — i.e., apply the formula to your most frequently viewed content instead of all your content.
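Under the post’s assumption that an exit signals success, the rating could be computed and ranked like this (the content names and counts are hypothetical):

```python
def effectiveness(exits, views):
    """Percent of views that ended the visit (higher = more effective,
    under the assumption that users exit once they have their answer)."""
    return exits / views * 100

# Hypothetical per-content stats: (exits, views)
content_stats = {
    "fix-paper-jam": (100, 1000),
    "printer-specs": (10, 1000),
}

# Exit Propensity across all content: rank from most to least effective,
# so the worst offenders end up at the bottom.
for name, (exits, views) in sorted(content_stats.items(),
                                   key=lambda kv: effectiveness(*kv[1]),
                                   reverse=True):
    print(f"{name}: {effectiveness(exits, views):.0f}%")
```

In practice you would pull exits and views per content item from your analytics tool and, as suggested above, weight or filter to the most frequently viewed items first.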

Back to our formula (Exits / Page Views × 100): if the percentage is low, then you need to look at other measures. First, consider the causes. Likely, if users are viewing the content, the title and description were compelling enough for the user to click through. So I would consider these possible causes:

  1. Mismatch between title and description and the actual content
  2. Content is outdated
  3. Content is incomplete
  4. Content is not written at the users’ level, or is too complex

All of these symptoms are difficult to diagnose without having an expert evaluate the content — which can be expensive. Therefore, if you have a lot of this, it may pay off to look at your creation process. (We will save this discussion for another time.) With that being said, there are some clues to look for. If users refine searches and look at other content on your site after viewing this content, it may be a sign of mismatched or incomplete content. If users have a tendency to spend a long time with the content, then it is likely at the wrong level or too complex. Through process of elimination, you should conclude that other content likely fits the outdated category.
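The elimination process above could be sketched as a simple triage function. The signals and the mapping are my own hypothetical reading of the post, not a tested heuristic:

```python
def likely_cause(refined_search, viewed_other_content, long_dwell):
    """Map post-view behavior to the most likely content problem,
    by process of elimination (assumed thresholds and signals)."""
    if refined_search or viewed_other_content:
        return "mismatched title/description or incomplete content"
    if long_dwell:
        return "wrong level or too complex"
    # Neither signal fired: by elimination, suspect stale content.
    return "possibly outdated content"

print(likely_cause(refined_search=True,
                   viewed_other_content=False,
                   long_dwell=False))
```

A real diagnosis would still need an expert review of the flagged items, but a rule like this can prioritize which low-performing content to review first.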

Enough for today… I think this gives you a lot to consider about how it could fit into your organization. Again, I will continue to address the assumptions in the whitepaper over the next few weeks. Later, I will dig deeper into the analysis and diagnostic issues, so we can make corrections based on what the data is telling us.

October 1, 2006

Content Effectiveness and Functionalism

Filed under: Content Effectiveness, Web Analytics — Chris @ 9:07 pm

My entire career has been in customer support, with the last 11 years specifically in the web support space. Naturally, everyone has asked me, “How do you identify a web site fix as an equivalent to a phone fix?” When I tell them, “Unless the user tells you that your site fixed their problem, you really cannot tell,” the next question is, “Then how do you know the site is working?” Well, it takes some assumptions: (1) if your content is right (timely, accurate, and relevant), then your users are solving their problems; and (2) customers will leave when they have solved their problem or have gotten tired of looking for the solution. If you expand on assumption (2) by watching where customers exit on your site, the number of searches made, and the content pages viewed, you can get a really good picture of what is working and what is not.

With these assumptions in mind, I set out to try and identify the content effectiveness on my site – as that is what drives customer success. In this quest, I looked for others who are already doing this, others who have a different view, or others who have a solution for identifying a self-support success. It seems the web analytics vendors have yet to work in this space and the call center organizations are still stuck on trying to find the phone fix equivalent. In other words, I have not been successful in my quest.

Recently though I found a whitepaper, Functionalism: A New Approach to Web Analytics on Gary Angel’s blog, SEMAngel. Gary is the President of SEMphonic, a company that has over 10 years of experience in the analytics space. To make a long story short, I was able to take the methodology from the whitepaper and apply it to content effectiveness. The whitepaper I wrote, Content Effectiveness and Functionalism, discusses how you can use Functionalism to identify the health of your content, and where to focus your efforts to improve your content (its effectiveness).

Below is a short list of basic assumptions I made in writing the paper.

1. Your support set is segmented by product. (In addition, it can also be segmented by language and region/localized.)
2. Your primary support site goal is to provide self-support.
3. The majority of your users use the Fingertip Knowledge approach. (Most users use your site to solve an immediate issue or problem, not for proactive learning.)

In future posts I will discuss the methodology in further detail, answering questions and drilling down to its next level.
