Choosing Human-Computer Interaction (HCI) Appropriate Research Methods
Logging & Automated Metrics
"Usability is the measure of the quality of the user experience when interacting with something - whether a Web site, a traditional software application, or any other device the user can operate in some way or another." said Jacob Nielsen in one of its articles. The measuring can be done in many ways, by choosing one or more of the usability methods that are available to the designers/developers of a product.
This study focuses on two categories of research methods, logging and automated evaluation, each with its subcategories.
Logging: the collection and recording of information.
Server (hit) logs: collections of data about which pages are getting visited on a website and which path people are taking through the website.
Client (event) logs: collections of data about user-initiated activities within a web page of a visited website.
Proxy logs: collections of data about users' actions on the web; the proxy mediates between the client browser and the web server and logs all communication between the two.
Self-reporting logs: paper-and-pencil journals in which users log their actions and observations while interacting with a product.
Journaled session: a user testing situation in which usage data are automatically recorded into logs.
Automatic evaluation: measuring the usability of a system automatically, by standard inspections, simulated human interaction sequences, automated capture of user interaction data and user feedback.
3. Description of Methods
Logging can be manual or automated.
Automated logging involves having the computer collect statistics about the detailed use of a system. Typically, an interface log contains two kinds of statistics:
Statistics showing the frequency of use of commands and other system features, which can be used to optimize frequently used features and to identify features that are rarely used or never used. In addition, an analysis of usage patterns can be made from the logging data (see the sketch after this list).
Statistics showing the frequency of various events, such as error situations and the use of online help, which can be used to improve the usability of future releases of the system.
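As a minimal sketch of the first kind of analysis, the Python fragment below tallies command frequencies from a hypothetical interface log in which each line holds a timestamp and a command name; the log format and file name are assumptions for illustration, not a standard.

    from collections import Counter

    # Hypothetical interface log: one "timestamp command" pair per line,
    # e.g. "2001-10-28T14:03:12 search" -- the format is an assumption.
    def command_frequencies(log_path):
        counts = Counter()
        with open(log_path) as log:
            for line in log:
                fields = line.split()
                if len(fields) == 2:         # skip malformed lines
                    counts[fields[1]] += 1
        return counts

    # The most and least frequent commands point at features to optimize
    # or to reconsider in the next release.
    print(command_frequencies("interface.log").most_common(5))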
Usability studies on web-based applications make use of two types of logging techniques: server logs and client-side logs. These logs offer useful data about the interaction between users and the website. The data may be studied to generate inferences about the website design, to test prototypes over time, and to test theoretical hypotheses about the effects of different design variables on web users' behavior.
Server logs provide a high-level overview of the pages a site visitor has requested. They contain information about which document was requested, at what time, whether it was successfully delivered, and the address the request came from. There are four types of server log files: access (transfer) logs, error logs, referrer logs, and agent logs.
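As an illustration, the sketch below parses access-log entries in the Common Log Format emitted by most web servers, reporting page popularity and the share of failed hits; the log file name is an arbitrary choice.

    import re
    from collections import Counter

    # One Common Log Format entry per line, for example:
    # 127.0.0.1 - - [28/Oct/2001:13:55:36 -0500] "GET /index.html HTTP/1.0" 200 2326
    CLF = re.compile(r'(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+)[^"]*" (\d{3}) (\S+)')

    visits = Counter()
    failed = total = 0
    with open("access.log") as log:
        for line in log:
            match = CLF.match(line)
            if not match:
                continue
            host, when, method, path, status, size = match.groups()
            visits[path] += 1
            total += 1
            if status.startswith(("4", "5")):   # client and server errors
                failed += 1

    print("Most requested pages:", visits.most_common(5))
    if total:
        print("Failed hits: %d/%d (%.1f%%)" % (failed, total, 100.0 * failed / total))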
Client logs can identify problems or difficulties that a user is experiencing with the interface. They contain information about user-initiated actions performed while viewing a web page, such as scrolling, clicking, filling out a form, or the path taken through the site. Client-side logging tools are predominantly used as a means of collecting data in a controlled study environment, rather than in commercial applications.
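A minimal sketch of analyzing such data, assuming a hypothetical client log with one "user event target" triple per line, reconstructs each user's click path through the site:

    from collections import defaultdict

    # Hypothetical client event log: one "user event target" triple per
    # line, e.g. "u17 click /products" -- the format is an assumption.
    paths = defaultdict(list)
    with open("client_events.log") as log:
        for line in log:
            fields = line.split()
            if len(fields) != 3:
                continue                     # skip malformed lines
            user, event, target = fields
            if event == "click":             # keep navigation events only
                paths[user].append(target)

    # A user's click sequence can reveal trouble spots, such as repeated
    # backtracking between two pages while hunting for some content.
    for user, path in paths.items():
        print(user, " -> ".join(path))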
Some types of automatic evaluation do not involve users at all (as in an HTML-checking tool that tests for cross-platform compatibility, or a dialog-layout tool that verifies spacing and alignment properties). A computer can also simulate human interaction sequences (mouse clicks, text entry) to test a product's robustness.
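As a toy sketch of simulated interaction, the fragment below drives an illustrative text-entry widget with thousands of random keystrokes and checks an invariant after each one; the widget and its event names stand in for a real toolkit API.

    import random
    import string

    class TextField:
        """A toy text-entry widget standing in for the product under test."""
        def __init__(self):
            self.text = ""
            self.cursor = 0

        def key(self, ch):
            if ch == "BACKSPACE":
                if self.cursor > 0:
                    self.text = self.text[:self.cursor - 1] + self.text[self.cursor:]
                    self.cursor -= 1
            elif ch == "LEFT":
                self.cursor = max(0, self.cursor - 1)
            elif ch == "RIGHT":
                self.cursor = min(len(self.text), self.cursor + 1)
            else:
                self.text = self.text[:self.cursor] + ch + self.text[self.cursor:]
                self.cursor += 1

    random.seed(1)  # reproducible "monkey" input
    field = TextField()
    events = list(string.ascii_letters) + ["BACKSPACE", "LEFT", "RIGHT"]
    for _ in range(10000):
        field.key(random.choice(events))
        # Robustness invariant: the cursor must always stay inside the text.
        assert 0 <= field.cursor <= len(field.text)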
Proxy logging is a technique that is easy to deploy for any web site and is compatible with a wide range of operating systems and browsers. Proxy-based logging is done on an intermediate computer and avoids many of the deployment problems faced by client-side and server-side logging. A good example is the WebQuilt project [9].
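A bare-bones sketch of the idea, using Python's standard http.server: a browser configured to use localhost:8080 as its HTTP proxy has every plain-HTTP request forwarded and logged (the port and log file name are arbitrary choices, and this toy handles only GET requests).

    import logging
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    logging.basicConfig(filename="proxy.log", level=logging.INFO,
                        format="%(asctime)s %(message)s")

    class LoggingProxy(BaseHTTPRequestHandler):
        """Forwards plain-HTTP GET requests and logs every exchange."""

        def do_GET(self):
            # When the browser is configured to use this server as its
            # proxy, self.path holds the full requested URL.
            try:
                with urllib.request.urlopen(self.path, timeout=10) as upstream:
                    body = upstream.read()
                    status = upstream.status
                    ctype = upstream.headers.get("Content-Type",
                                                 "application/octet-stream")
            except Exception as exc:
                logging.info("FAIL %s (%s)", self.path, exc)
                self.send_error(502)
                return
            logging.info("OK %s status=%s bytes=%d", self.path, status, len(body))
            self.send_response(status)
            self.send_header("Content-Type", ctype)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), LoggingProxy).serve_forever()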
Journaled-session data can later be analyzed to determine a user's pattern of behavior, find trouble spots, examine learning times, and corroborate observations in other media. This approach may help automate the analysis of large volumes of usage data and helps in gathering data from remote sites. Journaled sessions allow one to perform usability evaluation across long distances and without much overhead. Once the code to journal the user's actions is in place, it is relatively inexpensive to distribute the test disk to a large number of participants.
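A minimal sketch of the journaling side, assuming the application under test calls a record() hook at each user action; the class, file name, and event names are illustrative.

    import json
    import time

    class SessionJournal:
        """Appends timestamped user actions to a journal file."""
        def __init__(self, path):
            self.path = path

        def record(self, action, **details):
            entry = {"time": time.time(), "action": action, **details}
            with open(self.path, "a") as journal:
                journal.write(json.dumps(entry) + "\n")

    # The application calls record() at each user action; the resulting
    # file can later be replayed or mined for patterns and trouble spots.
    journal = SessionJournal("session.jsonl")
    journal.record("open_document", name="report.txt")
    journal.record("command", name="search", query="usability")
    journal.record("error", message="pattern not found")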
Self-reporting logs are best used when there is neither the time nor the resources to provide the interactive package required for journaled sessions, or when the level of detail journaled sessions provide is not needed - for example, when one wants just general perceptions and observations from a broad section of users.
Tools
1) NIST Web Metrics (http://zing.ncsl.nist.gov/webmet) - a suite of research tools from NIST for the usability evaluation of web sites; it includes WebSAT, a static analyzer that checks a page's HTML against typical usability guidelines.
In a WebSAT session, the user supplies the page to check and selects the categories of rules to apply. Please note: the rules used in these categories do not form a comprehensive set of guidelines; they are a sample set of typical rules that demonstrates the feasibility (and limitations) of an automatic checker. After the user hits the submit button, a set of analysis results is displayed in the form of tables.
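The flavor of such a checker can be sketched in a few lines of Python; the two rules below (images without ALT text, anchors without an HREF) are illustrative stand-ins, not WebSAT's actual rule set.

    from html.parser import HTMLParser

    class UsabilityChecker(HTMLParser):
        """Flags sample rule violations while scanning a page's HTML."""
        def __init__(self):
            super().__init__()
            self.findings = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "img" and not attrs.get("alt"):
                self.findings.append("IMG without ALT text (hurts accessibility)")
            if tag == "a" and "href" not in attrs:
                self.findings.append("A element without an HREF")

    checker = UsabilityChecker()
    checker.feed('<html><body><img src="logo.gif"><a name="top"></a></body></html>')
    for finding in checker.findings:
        print(finding)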
2) NetTracker (http://www.sane.com/products/NetTracker/) - log file analysis software that provides detailed web site traffic reporting and web data mining.
3) NetIntellect (http://www.netintellect.com/NetIntellect30.html) - a 32-bit log analysis tool that generates reports (tables and graphs) showing statistical, geographic, and marketing trends in the performance and usage of any web site.
4) WebTracker (http://www.fxweb.com/tracker/) - a web service that provides graphical log file analysis.
5) HTTP-Analyze (http://www.netstore.de/Supply/http-analyze/) - a log analyzer for web servers.
6) WET (http://zing.ncsl.nist.gov/hfweb/proceedings/etgen-cantor/) - a web event-logging tool for the client side.
7) Bobby (http://www.cast.org/bobby) - a web-based tool offered by CAST that analyzes web pages for their accessibility to people with disabilities.
8) W3C HTML Validation Service (http://validator.w3.org/) - a free service that checks documents such as HTML and XHTML for conformance to W3C Recommendations and other standards.
As an example of its output, checking this document for XML well-formedness and validity produced: "No errors found! Congratulations, this document validates as XHTML 1.0 Transitional!"
Experiments and Studies
1) WebTrends conducted an analysis study [6] using the LogAnalyzer software package. Here are some of the results:
The Visits graph displays the overall number of visits to a Web site. An Ad Views graph and an accompanying table identify how often ads were viewed. Another table shows the total number of hits for the site, how many were successful, how many failed, and the percentage of hits that failed. A final table identifies the most popular browsers used by visitors to the site.
2) S. Trewin at the University of Edinburgh conducted a study [7] of input device manipulation difficulties. The paper describes the pilot study for an experiment intended to gather detailed information about input errors made with keyboards and mice; the work is a step towards the provision of dynamic, automatic support for configuring systems and applications to suit individual users. A detailed log of keyboard and mouse input was kept in order to analyze performance and errors.
3) M. Good's study "The Use of Logging Data in the Design of a New Text Editor" [8] examines how one technique, the use of logging data, was employed throughout the design of a new text editor that is measurably easy to learn and easy to use. Logging data was used in four areas: keyboard design, the initial design of the editor's command set, refinements made later in the design cycle, and the construction of a system performance benchmark.
4) Melody Ivory and Marti Hearst's research includes two studies relevant to the subject of automated metrics and analysis. The first [3] presents a taxonomy for automated usability analysis and illustrates it with an extensive survey of evaluation methods: 58 usability evaluation methods applied to WIMP (Windows, Icons, Menus, Pointer) interfaces and 50 methods applied to Web UIs. Of these 108 methods, only 31 apply to both Web and WIMP UIs. The second [4] is a quantitative analysis of a large collection of expert-rated web sites which reveals that page-level metrics can accurately predict whether a site will be highly rated. The analysis also provides empirical evidence that important metrics, including page composition, page formatting, and overall page characteristics, differ among web site categories such as education, community, living, and finance.
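To give a feel for page-level metrics of this kind, the sketch below computes three simple page-composition measures (word, link, and image counts) with Python's standard HTML parser; the choice of measures is illustrative rather than Ivory and Hearst's actual metric set.

    from html.parser import HTMLParser

    class PageMetrics(HTMLParser):
        """Collects simple page-composition measures from raw HTML."""
        def __init__(self):
            super().__init__()
            self.words = 0
            self.links = 0
            self.images = 0

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.links += 1
            elif tag == "img":
                self.images += 1

        def handle_data(self, data):
            self.words += len(data.split())

    metrics = PageMetrics()
    metrics.feed("<p>Welcome to our <a href='/about'>about</a> page.</p>")
    print(metrics.words, "words,", metrics.links, "links,", metrics.images, "images")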
Finally, we provide a link to a free automated web testing site (http://www.internetqa.com/) that offers test tools, test plans, consulting, and resources for testing and usability, as well as a list of previous studies (http://www.internetqa.com/web_tests/usability/white_papers.htm).
1. Follow these steps in running a logging and evaluation process: decide which usability questions the data should answer, instrument the system to capture the relevant events, collect data from a representative set of users, and analyze the resulting logs against the original questions.
2. The inherent imperfections in collecting and analyzing data may be overcome by triangulating server logging, client logging, and usability testing, thus increasing the effectiveness of the evaluation.
References
1. Online Guide to Usability Resources
2. Hom, J. - The Usability Methods Toolbox
3. Ivory, M. - State of the Art in Automated Usability Evaluation of User Interfaces
4. Ivory, M.; Hearst, M. - Empirically Validated Web Page Design Metrics
5. Burton, M.; Walther, J. - The Value of Web Log Data in Use-Based Design and Testing
6. WebTrends LogAnalyzer
7. Trewin, S. - A Study of Input Device Manipulation Difficulties
8. Good, M. - "The Use of Logging Data in the Design of a New Text Editor"
9. Hong, J.; Heer, J.; Waterson, S.; Landay, J. - WebQuilt: A Proxy-based Approach to Remote Web Usability Testing
Last updated October 28, 2001