A Taxonomy (Visual Overview) of 70+ UX Metrics


Measuring the user experience starts with UX metrics. But there is no single UX measure, no universal gauge that provides a complete view of the user experience. Instead, we rely on multiple metrics, each offering an incomplete yet complementary perspective.

We’ve identified over 70 UX metrics, encompassing action metrics (what people do) and attitude metrics (what people think and how they feel).

Managing and mastering 70+ different metrics can be overwhelming, so it helps to categorize them. UX metrics can be broken down into task-based and study-level metrics:

  • Task-based: These metrics focus on a representative user attempting to achieve a realistic goal (e.g., finding a movie, purchasing a product, booking a hotel room). Tasks are the cornerstone of usability testing. While assessing a task experience may not always be feasible due to budget, timeline, or product-access constraints, observing participants as they complete tasks can reveal usability issues and inform your design improvement efforts.
  • Study-level: These metrics are collected once per study (unlike task metrics, which are gathered per task) and provide a broader reflection on the product experience, capturing factors such as overall satisfaction, perceived usability, usefulness, and behavioral intentions (e.g., likelihood to purchase or recommend). The sketch after this list illustrates the difference in granularity.
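To make the task-based vs. study-level distinction concrete, here is a minimal sketch in Python of how the two kinds of records differ in granularity. The class and field names are hypothetical; the article doesn't prescribe any particular data model.

```python
from dataclasses import dataclass

@dataclass
class TaskResult:
    """One record per participant per task (task-based metrics)."""
    participant_id: str
    task_id: str
    completed: bool       # action metric: binary task completion
    time_seconds: float   # action metric: task time
    seq_rating: int       # attitude metric: Single Ease Question (1-7)

@dataclass
class StudyResult:
    """One record per participant per study (study-level metrics)."""
    participant_id: str
    sus_score: float              # attitude metric: System Usability Scale (0-100)
    likelihood_to_recommend: int  # behavioral intention: LTR (0-10)
```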

While it’s useful to classify UX metrics based on these attributes, it’s also helpful to view them in a high-level framework. Figure 1 organizes these metrics into a coherent structure—thank you, Phil Siebler, for designing the infographic!


Figure 1: Overview of 70+ UX metrics (as of the publication date of this article).

Beyond organizing UX metrics, we’ve introduced three additional dimensions to consider when selecting metrics: Popularity, Ease of Collection, and Reference Benchmarks. Each dimension is represented in Figure 1 with a shape (triangle, circle, square) and a color (green, yellow, red).


1. Popular Usage (Triangles)

Selecting UX metrics isn’t a popularity contest, but familiarity does play a role. When a metric is widely known, it’s easier to communicate, interpret, and benchmark.

We categorized metrics into three popularity levels based on our experience reviewing reports, published papers, and historical references (including our 2009 paper on correlations among prototypical UX metrics). Popularity evolves over time and isn’t based on rigid thresholds:

  • Widely used (green triangle): Metrics such as the SUS, the SEQ®, and completion rates are industry standards.
  • Moderately used (yellow triangle): Metrics such as the Customer Effort Score (CES), AttrakDiff, and the SUISQ have valid applications but are less common.
  • Rarely used (red triangle): Metrics such as Microsoft's Net Satisfaction (NSAT) and the Technology Acceptance Model (TAM) are used infrequently in applied UX research. Infrequent use doesn't imply low quality, however; metrics like eye-tracking are niche but valuable in the right context.

2. Ease of Collection (Circles)

The easier a metric is to collect, the more likely it will be used. However, some harder-to-collect metrics offer significant value, justifying the effort to collect them.

  • Easy to collect (green circle): Single-item scales such as the SEQ require minimal effort. Task time and completion rates are also relatively easy, especially with automated tools (e.g., MUiQ®); see the sketch after this list.
  • Moderate effort (yellow circle): Error rates provide rich diagnostic insights but require careful observation and precise definitions.
  • Difficult to collect (red circle): Advanced metrics such as eye-tracking time to first fixation and facial expression analysis require specialized software and equipment, making them less accessible to UX researchers.
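As a rough illustration of why completion rates and task times sit in the easy-to-collect category, the arithmetic is simple once the raw observations are logged. This is a minimal Python sketch with invented example data; the geometric mean is one common way to summarize positively skewed task times, though the article itself doesn't prescribe a summary statistic.

```python
from math import exp, log

def completion_rate(outcomes: list[bool]) -> float:
    """Proportion of participants who completed the task (0 to 1)."""
    return sum(outcomes) / len(outcomes)

def geometric_mean_time(times_seconds: list[float]) -> float:
    """Geometric mean of task times (assumes all times > 0);
    often preferred because task-time data are positively skewed."""
    return exp(sum(log(t) for t in times_seconds) / len(times_seconds))

# Invented example: 8 participants attempt the same task.
outcomes = [True, True, False, True, True, True, False, True]
times = [42.0, 55.5, 120.0, 38.2, 61.0, 47.3, 150.9, 50.1]  # seconds

print(f"Completion rate: {completion_rate(outcomes):.0%}")         # 75%
print(f"Geometric mean time: {geometric_mean_time(times):.1f} s")
```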

3. Reference Benchmarks (Squares)

Comparing a UX metric against an established dataset simplifies interpretation and improves stakeholder communication.

  • Well-benchmarked (green square): Metrics such as the SUS and SEQ have extensive published norms. For example, a SUS score of 80 is considered very good, better than 90% of a dataset of 500+ products (an A− on the Sauro–Lewis curved grading scale for the SUS; see the scoring sketch after this list). The SEQ also has established norms indicating what constitutes an easy or difficult task.
  • Moderately benchmarked (yellow square): The SUPR-Q® has good normative data, but scoring requires a license. Post-task confidence has limited benchmarks.
  • Context-dependent or limited (red square): Metrics such as completion rates and task times depend heavily on the specific task and the consequences of failure, making external benchmarks challenging (although rough benchmarks exist for completion rates). Metrics such as the Customer Effort Score (CES) and clutter measurement show promise but lack sufficient data to establish reliable benchmarks.
  • No benchmarks (gray square): No published reference benchmarks are currently available for these metrics.
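For readers unfamiliar with how a SUS score such as 80 is produced, the standard scoring formula is well established: each odd-numbered item contributes (response − 1), each even-numbered item contributes (5 − response), and the summed contributions are multiplied by 2.5 to yield a 0-100 score. Here's a minimal Python sketch; the example responses are invented.

```python
def sus_score(responses: list[int]) -> float:
    """Standard SUS scoring for 10 items rated on a 1-5 agreement scale.
    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The summed contributions are multiplied by 2.5 for a 0-100 score."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    contributions = (
        (r - 1) if i % 2 == 0 else (5 - r)  # i == 0 is item 1 (odd-numbered)
        for i, r in enumerate(responses)
    )
    return sum(contributions) * 2.5

# Invented example: one participant's responses to items 1-10.
print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # 80.0, an A- per the text above
```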

Choosing the right UX metric starts with understanding what you're trying to measure. Beyond that, three practical considerations (popularity, ease of collection, and reference benchmarks) help guide decision-making.


Ideally, you want metrics that score well across all three dimensions. Think of it as a slot machine: instead of lining up three red cherries, you want to line up three green shapes (popular, easy to collect, and benchmarked). Among our 70+ UX metrics, only a few consistently meet all three criteria:

  • SEQ (Single Ease Question)
  • LTR/NPS (Likelihood to Recommend / Net Promoter Score; see the scoring sketch after this list)
  • SUS (System Usability Scale)
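Of these three, the NPS has the most widely known formula: the percentage of promoters (ratings of 9-10 on the 0-10 Likelihood-to-Recommend item) minus the percentage of detractors (ratings of 0-6). A minimal Python sketch with invented ratings:

```python
def net_promoter_score(ltr_ratings: list[int]) -> float:
    """NPS from 0-10 Likelihood-to-Recommend ratings:
    % promoters (9-10) minus % detractors (0-6), ranging from -100 to 100."""
    n = len(ltr_ratings)
    promoters = sum(1 for r in ltr_ratings if r >= 9)
    detractors = sum(1 for r in ltr_ratings if r <= 6)
    return 100 * (promoters - detractors) / n

# Invented example: 10 participants' ratings.
print(net_promoter_score([10, 9, 9, 8, 8, 7, 7, 6, 5, 10]))  # 20.0
```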

While these three metrics offer a strong starting point, they shouldn’t be the only ones you collect. In an upcoming article, we’ll discuss which four metrics we recommend if you could pick only a few—and why.

Note: This taxonomy is a living document. The popularity of metrics can rise and fall, new methods can make formerly difficult metrics easier to collect, and metrics that lack benchmarks this year might have some next year. We plan to update this infographic over time, so check back here for the latest version.
