
What Is a Typical No-Show Rate for Moderated Studies? – MeasuringU


You have grand research plans. Prototype ready. Stakeholders scheduled to observe. The time comes, and the participant is a no-call no-show.

No-shows are a fact of life when conducting UX research. They happen in both remote and in-person studies, so you need to plan for them.

While over-recruiting makes sense, you also don’t want to over-recruit so much that you waste tight recruiting budgets or scarce participants.

When participants call to cancel or reschedule, it at least spares researchers (and observers) from waiting around. It’s the no-call no-shows that can make running moderated sessions frustrating. In this article, we use “no-show” as a shorter way of saying “no-call no-show.”

To plan for no-shows, it helps to know what to expect. So, what is a typical no-show rate? To help answer that question, we looked at published data as well as our own data.

In our 2018 article, based on our experience moderating studies up to that time, we suggested expecting a 10% to 20% no-show rate. However, that guidance is six years old, and it was based on some back-of-napkin estimates.

Although MeasuringU is primarily known for quantitative research, we still conduct a substantial amount of qualitative research, and many of our benchmark studies require a research moderator in attendance. Since 2018, the volume of MeasuringU’s annual moderated studies has more than tripled. We wanted an updated and more accurate estimate of no-show rates that takes into account the availability of new panel sources like User Interviews and Respondent.

User Interviews Data from 2022

User Interviews is a do-it-yourself (DIY) panel that allows researchers to recruit and compensate participants directly. Traditional panel companies don’t provide direct access to the larger pool of candidates but instead curate a list of candidates based on profiles and screeners, and they control the communication. The advantage of a DIY panel is you get access to potential candidates quickly and at a lower price per complete because less cost is associated with the overhead of project managers and recruiters. But does that direct access and lower cost come with a price in no-show rates?

In 2022, User Interviews published a detailed article about how incentives play a role in no-show rates. In that article, they analyzed ~20,000 of their completed projects.

Their no-show rates were defined as participants who were scheduled and confirmed but didn’t show up for their scheduled sessions (either in-person or remote). User Interviews serves a lot of companies. Consequently, they reported data on a lot of studies.

Raw numbers weren’t provided in the article, so we estimated values by graphically interpolating the figures (e.g., a value midway between axis labels of 0% and 40% would be read as 20%, and so forth). We used their Figures 7 and 7a for business-to-consumer (B2C) no-show rates and Figures 14 and 14a for business-to-business (B2B) no-show rates, which allowed us to estimate values from most (but not all) of their 20,000 projects.

Table 1 shows our interpolated estimates from over 14,000 User Interviews studies. No-show rates ranged from a low of 0% to a high of 34%.

Audience Location Avg No-Show Rate # of Studies
B2C In-Person  8.3%    564
B2C Remote  8.3%  8,850
B2B In-Person  7.2%     27
B2B Remote 10.4%  4,769
Average | Total  8.6% 14,210

Table 1: Average no-show rate graphically interpolated from User Interviews 2022 data shown in their Figures 7, 7a, 14, and 14a.

As Table 1 shows, User Interviews broke down their data by type of user (B2B or B2C) and whether sessions were in-person or remote. They clearly facilitate many more remote than in-person sessions: we estimate they reported no-show rates on 13,619 remote versus 591 in-person projects. Their highest average no-show rate by category was for B2B remote sessions at 10.4%, and both in-person and remote B2C sessions had approximately 8.3% no-show rates. The lowest no-show rate was 7.2% for B2B in-person studies, but this was also their smallest sample at 27 projects. Confirming their general finding that incentives play a role in no-show rates, we ran a simple correlation between incentives and no-show rates by category on the data we derived from their figures. We found medium to strong correlations between no-show rates and incentives (−.43, −.81, −.80, and −.33 for the four categories; weighted average correlation using the Fisher z-transformation: −.80).
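The Fisher z averaging step can be sketched as follows. This is a minimal illustration, not the exact computation from the article: we assume the four correlations pair with the four Table 1 categories in order and that weights are the usual n − 3, so the resulting value is ours, not theirs.

```python
import math

# Correlations between incentives and no-show rates, interpolated from the
# User Interviews figures. Assumption: they pair with the Table 1 categories
# in order (B2C in-person, B2C remote, B2B in-person, B2B remote).
correlations = [-0.43, -0.81, -0.80, -0.33]
n_projects = [564, 8850, 27, 4769]

# Fisher z-transformation: z = atanh(r). Correlations are averaged in
# z-space (weighted by n - 3, the conventional Fisher weight, so large
# categories dominate) and then transformed back with tanh.
z_values = [math.atanh(r) for r in correlations]
weights = [n - 3 for n in n_projects]
z_mean = sum(w * z for w, z in zip(weights, z_values)) / sum(weights)
r_mean = math.tanh(z_mean)
print(round(r_mean, 2))
```

Averaging in z-space rather than averaging the raw r values directly reduces the bias that comes from the bounded, skewed sampling distribution of correlation coefficients.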

2024 MeasuringU Data

But what if you don’t use User Interviews? How generalizable are those no-show rates to other recruiting sources? To complement the large User Interviews dataset, we pulled together our no-show rates for several projects in 2024. MeasuringU uses several recruiting sources, including User Interviews, Respondent, customer lists, and other professional panel agencies that our Operations Team coordinates.

We conduct thousands of moderated sessions annually across dozens of moderated studies in both our Denver labs and remotely (using the MUiQ® platform). We recruit a wide range of participants from general consumers to highly technical or specialized users.

We looked at our most recent 24 studies, all conducted in 2024, in which 958 people participated in 17 B2C and 7 B2B studies. Of these 24 studies, 10 were in-person in our labs, and 14 were remote. Table 2 shows the average no-show rate was an impressive 5% overall, only slightly higher for in-person research.

Location # Studies No-Show Rate
In-Person 10 5.8%
Remote 14 4.5%
Total | Average 24 5.0%

Table 2: No-show rates for 24 MeasuringU studies with weighted average.

We attribute our impressive no-show rate to the additional (stellar) layer of recruitment effort from our Operations Team. This involves another level of screening, reminding, and qualifying that goes into professional recruitment (and a dedicated Operations Team!). The study sessions ranged from 30 minutes to two hours, with incentives averaging about $150 (ranging from $50 to $225). We also found a negative correlation of r = −.50 between no-show rates and incentives, confirming the User Interviews finding that increased incentives help to decrease the incidence of (but don’t eliminate) no-shows.

So far, we’ve provided an aggregate no-show rate for in-person and remote work. But what happens to the no-show rate when the weather is inclement, and people are scheduled for an in-person study?

Our labs are in Denver, Colorado, which, at an elevation of one mile above sea level, is where the plains meet the Rocky Mountains. It’s arid and sunny most of the year, but we get our fair share of snow. So, we’ll call the no-show rate in snowy weather the snow-show rate.

We’re not talking about trying to participate during a snowstorm in the South, where a dusting turns into Snowmageddon, or when there’s a travel ban in place. Denver-based participants are generally used to driving in snow.

To estimate a snow-show rate, we looked at three days of data in November when we had scheduled a lot of in-person studies and then experienced three days of heavy snow—somewhat unusual even for Denver in November.

Total Scheduled 128
No-Show  18
Rescheduled/Called to Cancel  36
Showed Up  74
Snow-Show Rate (No-Show/Total Scheduled)  14%

Table 3: “Snow-Show” rate on three snowy days in early November 2024 for in-person research in Denver.
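The rates in Table 3 fall out of simple arithmetic. A quick sketch (numbers taken from the table) also computes the yield rate, the share of scheduled participants who actually arrived:

```python
# Outcomes from three snowy November days of in-person sessions in Denver
# (counts from Table 3).
scheduled = 128
no_shows = 18                    # no-call no-shows only
cancelled_or_rescheduled = 36    # called ahead, so excluded from no-shows
showed_up = scheduled - no_shows - cancelled_or_rescheduled  # 74

snow_show_rate = no_shows / scheduled   # the "snow-show" rate
yield_rate = showed_up / scheduled      # participants who actually arrived
print(f"{snow_show_rate:.0%}, {yield_rate:.0%}")  # → 14%, 58%
```

Note the gap between the two figures: the no-show rate counts only no-call no-shows, while the yield rate also absorbs cancellations and reschedules.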

So, relative to our typical in-person no-show rate, when it snows, the rate more than doubles.

Summary

Our analysis of no-show rates based on data from over 14,000 moderated studies revealed:

The average no-show rate is about 8%. Averaging across a large dataset from User Interviews (14k studies and 9% no-show) and from MeasuringU’s Operations Team (24 studies and 5% no-show), we found the average no-show rate fluctuates between 5% and 10%. A conservative estimate would be to use 10% as the typical no-show rate (within the range we estimated in 2018).

No-show is not the same as the usable session rate. If we use a no-show rate of 10%, does that mean you should plan to over-recruit by 10%? You could, but that will almost certainly mean you will not have enough participants because the no-show rate doesn’t include people who call and cancel/reschedule or those who show up but end up being unusable because they didn’t qualify (usually from misinterpreting or misrepresenting themselves on a screener). Table 3, which outlines our snow days, provides some idea about the expected yield. We’ll dig into this more in an upcoming article.
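As a rough planning aid, the number of participants to schedule can be estimated by dividing the usable sessions you need by an expected yield. This is a hedged sketch, not a formula from our data: the no-show rate uses the article’s conservative 10% estimate, while the cancellation and unusable rates are hypothetical placeholders you would replace with your own estimates.

```python
import math

# Estimate how many participants to schedule to end up with `needed`
# usable sessions, after losses from no-shows, cancellations, and
# participants who attend but don't qualify.
needed = 10
no_show_rate = 0.10     # conservative estimate from this article
cancel_rate = 0.10      # hypothetical: cancel/reschedule and aren't refilled
unusable_rate = 0.05    # hypothetical: attend but fail to qualify

# Fraction of scheduled sessions expected to produce usable data.
expected_yield = (1 - no_show_rate - cancel_rate) * (1 - unusable_rate)
to_schedule = math.ceil(needed / expected_yield)
print(to_schedule)  # → 14
```

With these placeholder rates, over-recruiting by the 10% no-show rate alone (11 scheduled) would leave you short; the combined losses push the requirement to 14.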

Higher incentives help reduce no-show rates a little. We corroborated the findings from User Interviews that higher incentives lead to lower no-show rates. The correlation was about r = −.5, meaning incentive levels can account for about 25% of the variance in no-show rates (r² = .25).

Snow doubles the no-show rate (the snow-show rate). When it’s snowing (even in a snowy city), it’s not surprising that people are less willing and able to come in. Our snow-show rate for in-person research was approximately 14%, compared to our typical in-person rate of about 6%. It’s unclear how this may apply to your city (e.g., great vs. poor public transportation) or to other weather events like heavy rain or wind.
