Design, Performance, and Preference Metrics
1. Introduction
Interface design has become an integral part of the software development process. It is no longer enough for a software application to function correctly; it is now just as important for it to be usable and to provide users with a good experience.
To support these usability efforts, usability testing has become part of the software testing process. With it has come the development of usability metrics, which help software developers and interface designers evaluate the interface design of a software application and make changes as necessary.
2. Body
A. Evaluation Approaches and Methods
There are three main evaluation approaches, namely usability testing, field studies, and analytical evaluation (Rogers et al., 2007). These approaches make use of various methods such as modeling users’ performance; user testing; inspections; asking experts; asking users (for example, through questionnaires and interviews); and observing users.
Each of these approaches is intended to identify usability performance issues in a particular software application or user interface design based on specific metrics.
Usability Testing is an evaluation approach usually performed at the later stages of design to ensure consistency in the way the system responds to the user, in the use of terms, and in the navigation structure (Rogers et al.). It measures users' performance on typical tasks, for example, the time it takes a user to complete a task and the number of errors they make. In this evaluation approach, the testing format and environment are controlled by the evaluator.
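To make this concrete, the following Python sketch records the two measurements just mentioned, time on task and error count, for a single participant. It is illustrative only; the class and field names are our own and do not come from Rogers et al.

    import time
    from dataclasses import dataclass

    @dataclass
    class TaskRecord:
        # One participant's attempt at one test task.
        task_name: str
        start: float = 0.0
        end: float = 0.0
        errors: int = 0  # incremented by the evaluator on each observed error

        def begin(self) -> None:
            self.start = time.monotonic()

        def finish(self) -> None:
            self.end = time.monotonic()

        def duration_seconds(self) -> float:
            return self.end - self.start

    # Example: timing a "create account" task during which two errors occur.
    record = TaskRecord("create account")
    record.begin()
    # ... participant works on the task; evaluator observes ...
    record.errors += 2
    record.finish()
    print(f"{record.task_name}: {record.duration_seconds():.1f}s, {record.errors} errors")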
The Field Studies evaluation approach involves testing in natural settings, where people behave as they normally would when using the product in their daily activities. This type of testing is performed to evaluate technology; to introduce or deploy technology in new contexts; to establish design requirements; and to identify opportunities for new technology.
The Analytical Evaluation approach involves two methods, namely theoretically based models and inspections (Rogers et al.). Theoretically based models are used to predict user performance, while inspections include walkthroughs and heuristic evaluation. In walkthroughs, experts walk through scenarios using a prototype of the software application. Heuristic evaluations, on the other hand, apply knowledge of typical users, codified in standards and guidelines, to identify usability problems.
It should be noted that the presence of users is not required in the analytical evaluation approach.
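As an illustration of how inspection findings might be recorded, the sketch below tallies problems by the heuristic they violate. The findings themselves and the 0-4 severity scale (a convention popularized by Nielsen) are assumptions made for the example, not part of the approaches as described above.

    from collections import Counter

    # Each finding: (heuristic violated, severity 0-4, description).
    # All findings here are invented for illustration.
    findings = [
        ("Consistency and standards", 3, "'Save' is labeled 'Commit' on one screen"),
        ("Error prevention", 2, "No confirmation before deleting a project"),
        ("Consistency and standards", 1, "Mixed capitalization in menu items"),
    ]

    # Count problems per heuristic and find the most severe finding.
    per_heuristic = Counter(heuristic for heuristic, _, _ in findings)
    worst = max(findings, key=lambda finding: finding[1])

    for heuristic, count in per_heuristic.most_common():
        print(f"{heuristic}: {count} problem(s)")
    print(f"Most severe ({worst[1]}): {worst[2]}")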
B. Usability Metrics
Usability metrics have long been used in the field of software engineering as a way to measure software quality (Jacko & Stephanidis, 2003). Although numeric figures cannot completely represent a software application's usability, they still provide a useful level of insight into it.
There are three main categories of usability metrics, namely design metrics, performance metrics, and preference metrics.
Design metrics are used for the assessment of the quality of prototypes and designs. Examples are manufacturing prototype development costs, the “number of gates and engineering” (Kamal, 2008, p. 3), physical size, and power dissipation.
Performance metrics are used to measure actual usage of the completed software. Measurement is usually conducted by having “a group of test users perform a predefined set of test tasks while collecting time and error data” (Jacko & Stephanidis, p. 540). Some examples are the number of user errors; the time it takes users to complete a task; the number of different types of tasks a user can complete within a given period; and the time users spend recovering from errors (“Performance Measurement,” n.d.).
Performance metrics are useful in assessing the overall usability of a software application (Jacko & Stephanidis), where usability refers to users being able to use a web site or software product to achieve their goals in a timely manner and with satisfying results (Bevan, 2008). It should also be noted that these metrics can be “evaluated at run time in a simple manner” (Jacko & Stephanidis, p. 540).
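A minimal sketch of how the performance metrics listed above might be computed from test-session data follows. The data layout and the sample numbers are our own assumptions, not an API or dataset from the sources cited.

    from statistics import mean

    # One tuple per (user, task) attempt:
    # (completed, time_on_task_s, errors, error_recovery_s).
    # Sample numbers invented for illustration.
    sessions = [
        (True,  42.0, 1,  6.0),
        (True,  55.5, 0,  0.0),
        (False, 90.0, 4, 31.0),
    ]

    completion_rate = sum(1 for done, _, _, _ in sessions if done) / len(sessions)
    mean_time = mean(t for _, t, _, _ in sessions)
    mean_errors = mean(e for _, _, e, _ in sessions)
    mean_recovery = mean(r for _, _, _, r in sessions)

    print(f"Task completion rate: {completion_rate:.0%}")
    print(f"Mean time on task:    {mean_time:.1f} s")
    print(f"Mean errors per task: {mean_errors:.1f}")
    print(f"Mean recovery time:   {mean_recovery:.1f} s")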
Preference metrics quantify users' preferences and subjective evaluations. They are used to measure user satisfaction, which is also a determinant of a product's success in the market (Jacko & Stephanidis). Examples are learnability, control, helpfulness, efficiency, and affect.
Preference metrics are also designed to assess the following interface design elements that affect usability: facility, acquisition, interpretation, organization, aesthetics, and valence. It should be noted that the intrinsic qualities of these metrics make them difficult to assess at run time.
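Because preference metrics rest on subjective ratings, they are usually gathered with post-test questionnaires. The sketch below averages 1-5 Likert ratings per dimension, using the example dimensions named above; the scoring scheme and sample data are our own simplification, not a published instrument.

    from statistics import mean

    # 1-5 Likert ratings keyed by preference dimension; data invented.
    responses = {
        "learnability": [4, 5, 3, 4],
        "control":      [3, 4, 4, 3],
        "helpfulness":  [5, 4, 4, 5],
        "efficiency":   [3, 3, 4, 4],
        "affect":       [4, 4, 5, 4],
    }

    for dimension, ratings in responses.items():
        # Rescale the mean rating from 1-5 to 0-100 for easier comparison.
        score = (mean(ratings) - 1) / 4 * 100
        print(f"{dimension:12s}: {score:5.1f} / 100")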
Other classifications of usability metrics are usability in use and interface usability. The former refers to metrics used to measure the result of using the entire system, while the latter refers to measuring the quality of the user interface (Bevan).
Metrics under Usability in Use include safety, flexibility in use, satisfaction in use, productivity in use, and effectiveness in use (Bevan). Metrics under User Interface Usability, on the other hand, comprise system acceptability, which in turn includes practical acceptability (usefulness, reliability, compatibility, and cost) and social acceptability (Bevan).
C. Usability Measurement Standards or Models
There is no single accepted standard or model for measuring usability, which is perhaps why “most software developers do not apply correctly any particular model in the evaluation of usability” (Seffah et al., 2006).
Nevertheless, standards for measuring usability have been developed by experts, and some of these are discussed in this section.
There are three ISO/IEC standards for the measurement and quantification of usability (Seffah et al.). One of these is the ISO 9241-11, which identifies satisfaction, effectiveness, and efficiency as major usability attributes.
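The standard defines these attributes but leaves the concrete measures to the evaluator. The sketch below shows one common operationalization of the three attributes; all figures are invented sample data.

    # One common way to operationalize ISO 9241-11's three attributes.
    # All figures are invented sample data.
    tasks_attempted = 20
    tasks_completed = 17
    total_time_minutes = 35.0
    satisfaction_ratings = [4, 5, 3, 4, 4]  # 1-5 post-test ratings

    effectiveness = tasks_completed / tasks_attempted   # proportion of goals achieved
    efficiency = tasks_completed / total_time_minutes   # goals achieved per minute
    satisfaction = sum(satisfaction_ratings) / len(satisfaction_ratings)

    print(f"Effectiveness: {effectiveness:.0%}")
    print(f"Efficiency:    {efficiency:.2f} tasks/min")
    print(f"Satisfaction:  {satisfaction:.1f} / 5")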
Another is ISO/IEC 9126-1, which defines usability as a software quality attribute composed of the following factors: “usability compliance with published style guides or conventions for user interfaces” (Seffah et al., p. 162); attractiveness; operability; learnability; and understandability.
The third standard is ISO/IEC 9126-4, which defines quality in use as an attribute composed of the following factors: safety, productivity, and effectiveness or usefulness.
One of the earliest software quality models is the FCM (Factors, Criteria, Metrics) or GE (General Electric) model. This model consists of forty-one specific quality metrics, twenty-five quality criteria, and eleven quality factors. In this model, the quality factors are hypothetical constructs that correspond to users' perception of the system's external view. The model thus holds that hypothetical constructs such as software comprehension and software usability cannot be measured directly and can only be inferred indirectly through observed measures such as performance evaluation, user satisfaction, and perceived effectiveness.
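Since the model's factors can only be inferred from lower-level measures, its three-level structure lends itself to a simple weighted rollup, sketched below. The criteria, scores, and weights are all invented for illustration; the model itself only defines the hierarchical structure.

    # Illustrative FCM rollup: observed metric scores (0-1) are averaged
    # into criteria, and weighted criteria are summed into a factor score.
    # All names, scores, and weights are invented.
    usability_criteria = {
        "operability":  {"weight": 0.5, "metric_scores": [0.8, 0.7]},
        "learnability": {"weight": 0.5, "metric_scores": [0.6, 0.9]},
    }

    factor_score = 0.0
    for name, criterion in usability_criteria.items():
        criterion_score = sum(criterion["metric_scores"]) / len(criterion["metric_scores"])
        factor_score += criterion["weight"] * criterion_score
        print(f"{name}: {criterion_score:.2f}")

    print(f"usability (inferred factor score): {factor_score:.2f}")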
Usability in McCall's model (Seffah et al.) consists of three criteria, namely effectiveness, training, and operability. This quality model is based on criteria that can be related to multiple factors and measured with specific metrics. Similarly, Boehm's quality model (Seffah et al.) is hierarchical and is based on higher-order characteristics, which are hypothetical constructs that cannot be measured directly. Boehm's model also consists of nineteen different quality factors, which encompass portability, maintainability, and product utility.
3. Conclusion
Various kinds of usability performance issues are encountered with software applications. Because of this, it is imperative for a software application to undergo usability testing before being released to the public. This gives software developers and interface designers a chance to make modifications to reduce, if not eliminate, the usability issues. This is also to ensure that users can easily learn to use the software and can have a good experience doing so.
Usability measurement standards have been developed to provide interface designers and software developers with the metrics they can use in evaluating the usability of a software product. Although these usability measurement standards have yet to become truly “standard,” they still prove to be useful in helping software developers and interface designers come up with usable products, which users can find both useful and enjoyable.
References
Bevan, N. (2008). Classifying and selecting UX and usability measures. COST294-MAUSE Workshop: Meaningful Measures: Valid Useful User Experience Measurement. Retrieved from http://www.nigelbevan.com/papers/Classifying%20and%20selecting%20UX%20and%20usability%20measures.pdf
Jacko, J. A., & Stephanidis, C. (2003). Human-computer interaction: Theory and practice. Mahwah, NJ: Routledge.
Kamal, R. (2008). Lesson 9: Challenges in embedded system design: Optimizing the design metrics and formalism of system design. Retrieved from http://www.dauniv.ac.in/downloads/EmbsysRevEd_PPTs/Chap01Lesson_9Emsys.pdf
Performance measurement. (n.d.). Retrieved from http://www.usabilityhome.com/PerfMeas.htm
Rogers, Y., Sharp, H., & Preece, J. (2007). Interaction design: Beyond human-computer interaction (2nd ed.). Hoboken, NJ: John Wiley & Sons.
Seffah, A., Donyaee, M., Kline, R. B., & Padda, H. K. (2006). Usability measurement and metrics: A consolidated model. Software Quality Journal, 14, 159-178. doi:10.1007/s11219-006-7600-8