Is it Possible to Use Quality Metrics on the Web?
by Donn Le Vie
1. What is your role?
2. Metrics are usually associated with the quality of tangible products. Is it possible to use quality metrics for web content?
All too often, quality metrics end up being simple mechanical measures, such as the number of "errors per page," "pages per error," or some similar nonsensical figure. Unless you tightly define the criteria for "error," no one knows whether you mean typographical errors, printing errors, or errors of fact in the content. Such metrics can be useful internally, but only when they are tied to a clear, unambiguous definition of the quality criteria the measure represents.
In the end, you may produce a document or web page with "zero errors," yet the content may not be appropriate, accurate, or reliable. You have met the mechanical "quality" requirement of "no errors," but the value of the metric is practically zero. This type of problem has plagued documentation departments for years...and still does in many companies. Such mechanical metrics were used in previous years to somehow try to justify the overhead expense of having a documentation or publications department. The problem with that thinking is that upper management begins to think that anyone can "do documentation" if all you have to worry about is meeting a number in a mechanical metric.
The same sort of danger is present for web designers, web developers, and webmasters who insist on promoting and reporting mechanical measures as indicators of "quality."
3. Can you give examples of quality metrics that might be designed for an e-commerce site?
So long as the people defining the value of that usability are customers, I think that feedback is necessary to continuously maintain a top-quality e-commerce site.
Many companies are still in the dark when it comes to determining how many online customers exist for their products. Part of the problem lies in the absence of measurement standards and criteria, and part is that the complexity of the Web inhibits the standardization process that would let you compare apples to apples -- or web sites to web sites. It's a sort of Catch-22 situation.
Sure, there are site visitor measurement and analysis tools, such as Andromeda, Clickshare, and Open Market, that can track visitors to sites or capture customer e-commerce transactions over the Web, but these products offer varying levels of detail and accuracy.
4. Who should manage this effort?
5. Quality metrics are sometimes seen as cumbersome and "bureaucratic." How can quality managers avoid these labels?
Quality metrics become meaningful when you can assign a quantitative figure to the definition of that metric. As I mentioned earlier, part of the problem with assessing quality on the web is the lack of standards, and we don't have standards because of the dynamic nature of the web.
In order for quality metrics to add value, you need to stay away from "yes/no" questions. Instead, ask questions that can help you determine a quantitative valuation that upper management understands.
A good example of this is the eight-second rule. Asking "Do the pages load in less than eight seconds?" yields a non-value-add response. But you can estimate the cost -- a hard number -- to your company if your pages take longer than eight seconds to load. If it costs $X to retain a customer and, according to advertising/marketing studies, $8X to obtain a new one, you have an incentive to determine what percentage of daily, weekly, or monthly hits are repeat hits (using logs from your web server) at page-load rates above or below eight seconds. If you estimate that 40% of your customers are repeat customers, and that perhaps 40% of them will not return when pages take more than eight seconds to load, you can come up with a formula for roughly estimating the projected savings in customer retention costs from applying the eight-second rule.
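The back-of-the-envelope calculation described above can be sketched in a few lines of code. All figures here (visitor counts, the $5 retention cost, the 40% rates) are illustrative assumptions, not data from the interview; only the "replacement costs 8X retention" ratio comes from the text.

```python
# Rough retention-savings estimate from the eight-second rule.
# All input figures below are illustrative assumptions.

def projected_retention_savings(monthly_visitors, repeat_rate,
                                at_risk_rate, retention_cost,
                                acquisition_multiplier=8):
    """Estimate monthly savings from keeping pages under eight seconds.

    repeat_rate: fraction of visitors who are repeat customers
    at_risk_rate: fraction of repeat customers lost when pages
                  take longer than eight seconds to load
    retention_cost: cost ($X) to retain one customer
    acquisition_multiplier: replacing a lost customer costs ~8X
    """
    repeat_customers = monthly_visitors * repeat_rate
    lost_customers = repeat_customers * at_risk_rate
    # Each lost repeat customer would have to be replaced at the
    # (much higher) acquisition cost; retaining them costs only $X each.
    replacement_cost = lost_customers * retention_cost * acquisition_multiplier
    retention_spend = lost_customers * retention_cost
    return replacement_cost - retention_spend

# 10,000 monthly hits, 40% repeat, 40% of those at risk, $5 to retain:
savings = projected_retention_savings(10_000, 0.40, 0.40, 5.00)
print(f"Projected monthly savings: ${savings:,.2f}")  # $56,000.00
```

The point is not the precision of the numbers but that the output is a dollar figure upper management can weigh, rather than a yes/no answer.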
That's the kind of information that gets upper management to take notice of your efforts…not the simple fact that your pages load in eight seconds or less.
6. Web sites are becoming more and more varied. Is one set of metrics sufficient or are multiple sets required?
7. What resources and tools should be used to measure quality on the web?
Suggestions for web design and navigation guidelines can be easily converted to quality checklists as long as they can be related directly to customer requirements.
I remember reading one of Nielsen's DevHead columns where he used regression analysis and other statistical tools to determine whether consecutive incremental hit rates for a particular site were just background noise or valid data suggesting an increase in the site's traffic. I actually borrowed his technique and applied it to a customer support call center problem. The center wanted to know if the decreasing trend in the number of phone calls to the support center was spurious or signified something more meaningful. The statistical analysis revealed that the decrease in the number of calls was significant: it reflected the documentation team's efforts to post problem resolutions on the customer-support web site within 24 hours. Rather than calling the support center, customers could find some of the answers to their problems by referring to the web site.
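A minimal version of that trend-significance check can be done with ordinary least squares: fit a line to the weekly counts and test whether the slope differs from zero. The call counts below are fabricated sample data, not figures from the case described; the exact method Nielsen used is not specified in the text, so this is one common way to perform such a test.

```python
# Minimal trend-significance check via ordinary least squares.
import math

def slope_t_statistic(y):
    """OLS slope of y against its time index, and the slope's t statistic."""
    n = len(y)
    x = list(range(n))
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    # Residual standard error with n - 2 degrees of freedom.
    sse = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(sse / (n - 2)) / math.sqrt(sxx)
    return slope, slope / se

# Weekly support-call counts (hypothetical): an apparent decline.
calls = [412, 398, 390, 371, 360, 344, 331, 325, 310, 298]
slope, t = slope_t_statistic(calls)
# A |t| well above ~2.3 (the two-tailed 5% critical value for 8 d.f.)
# suggests the downward trend is real, not background noise.
print(f"slope per week: {slope:.1f}, t = {t:.1f}")
```

With noisy, flat data the t statistic stays near zero and the "trend" can be dismissed as noise; that distinction is exactly what separates a meaningful metric from a number that merely wiggles.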
That was a case where we showed a direct cost savings by using a quantitative quality metric (in this case, a statistical measure). There were two ways we could have presented this information to upper management:
1. Calls to the customer support call center decreased by 7% last month, or
Which of these metrics meant more to upper management?
Remember: the whole purpose behind tying quality metrics to customer requirements is to derive a quantitative value for that quality metric. If such a number cannot be determined directly, you can derive a ballpark quantitative measure by asking what it would cost the company if those quality parameters were not measured or implemented.
8. What advice can you give webmasters who are looking to define metrics for their multilingual web sites?
9. When is the right time to deploy quality metrics in a project?
As the prototype site is being developed, it is critical to schedule testing of design and functionality features as they have been defined in the Requirements Specification. Test for "look and feel," for navigation ease, for add-on features such as scripts. Tester feedback should be open-ended to allow testers to elaborate on their responses. Use a variety of information-gathering formats, such as checklists, questionnaires, and online forms, for the dynamic feedback the design and development teams (and customers) need to home in on those requirements. Repeat the process for beta testing prior to customer release to confirm that the recommendations of the previous testing sessions have been incorporated into the project. Have the customer participate in the test, and have them sign off on the beta test to indicate that all requirements have been met.
As we all know and have probably experienced, once the site is made available to the public, the unsolicited feedback begins pouring in. Actually, if you're in the webmaster business, there is no such thing as "unsolicited" feedback, because every kind of user response -- good, bad, or indifferent -- should be evaluated for its potential value in helping improve the site.
No client I've ever dealt with anticipated every user requirement upfront in a spec. In fact, if the system you delivered meets or exceeds each and every customer requirement, that's great -- but you're only halfway there. Sometimes the most important requirements show up in user feedback after the site has gone live. Another way of thinking about the live web site is as an ongoing 24 x 7 usability test, or a metric waiting to be measured.
You design metrics when you are gathering customer requirements; you deploy metrics when you begin the design and development of the site; and you collect metrics during the entire lifecycle of the site. Don't forget about the covert feedback that's available from Internet server logs about your site's usage. Mine those logs for information that can help you add or change links to encourage users to follow a specified click-through pathway to other areas of your site.
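Mining server logs for this kind of information can start very simply. The sketch below parses a few Common Log Format entries (the sample lines and hostnames are fabricated) to surface two of the signals mentioned above: which hosts are repeat visitors, and which pages draw the most traffic.

```python
# Sketch: mining a web-server access log for repeat visits and
# popular pages. Log lines follow the Common Log Format; the sample
# entries below are fabricated for illustration.
from collections import Counter

sample_log = """\
10.0.0.1 - - [01/Mar/2001:10:00:01] "GET /index.html HTTP/1.0" 200 1043
10.0.0.1 - - [01/Mar/2001:10:00:09] "GET /products.html HTTP/1.0" 200 2211
10.0.0.2 - - [01/Mar/2001:10:01:12] "GET /index.html HTTP/1.0" 200 1043
10.0.0.1 - - [01/Mar/2001:11:30:44] "GET /index.html HTTP/1.0" 200 1043
"""

hits_per_host = Counter()
page_hits = Counter()
for line in sample_log.splitlines():
    host = line.split()[0]                 # client address
    page = line.split('"')[1].split()[1]   # path inside "GET ... HTTP/1.0"
    hits_per_host[host] += 1
    page_hits[page] += 1

repeat_hosts = [h for h, n in hits_per_host.items() if n > 1]
print("repeat visitors:", repeat_hosts)
print("most requested:", page_hits.most_common(1))
```

A real analysis would group requests into sessions and follow referrer fields to reconstruct click-through pathways, but even counts this crude begin to turn raw log files into a usable metric.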
10. Which people or departments within an organization should be involved in the definition of metrics?
11. How, specifically, would you measure the linguistic quality of a localized web site?
Documents translated from English often ignore the design ramifications of translating to a target language. For example, translating from English to Icelandic requires some knowledge or awareness of Icelandic grammar. A seven-line paragraph in English could easily translate to a 12- or 15-line paragraph in Icelandic, which creates design issues, especially when using embedded graphics, photos, and line art.
The most successful measures of linguistic quality for a translation always use a native speaker to help design and provide feedback during the project life cycle.
12. Is the tracking of quality metrics done manually or are there tools available that can assist in the collection and analysis of data?
13. What are the risks associated with NOT defining quality measures?
Donn Le Vie, Jr. currently works for Intel's Network Computing Group in Information Engineering. He is a former research geological oceanographer with NOAA, geologist/geophysicist with Phillips Petroleum Company, and adjunct faculty lecturer with the University of Houston Downtown College. His previous employers and clients include NASA, Motorola, Intel, Synercom Technology, Tadpole Technology, Fisher-Rosemount, M2K, Association of Certified Fraud Examiners, SEMATECH, and Integrated Concepts, Inc.
Donn is a frequent presenter at national and international conferences and a featured speaker at many regional meetings. He has authored more than 60 technical and scientific publications, more than 600 general-interest articles, and two non-fiction books. His latest book, Designing eCommerce Proposals that Win New Business, is due out later this year, and focuses on integrating a component-based proposal design process with a business opportunity evaluation methodology that guarantees at least an 80% proposal success rate.
© 2001-2010 ForeignExchange Translations, Inc.