A lot goes into planning a moderated session such as a usability test or an in-depth interview. And recruiting qualified participants is an essential step in making those sessions successful.
However, recruiting is time-consuming and often frustrating. There’s a sort of Murphy’s Law in recruiting (and in product demos)—the Stakeholder Corollary:
The more stakeholders observing, the more likely things are to go wrong.
What can go wrong? The problems we've experienced over the years range from common setbacks:
- No shows
- Late arrivals
- Reschedules
- Participants misrepresenting their qualifications
- Product/prototype issues
To less common but equally pernicious problems:
- Power outages
- Internet outages
- Freak weather events (snow, ice, hurricanes)
- Intoxicated (or high) participants
- Uncooperative participants
In an earlier article, we looked at the no-show rate for in-person and remote sessions for both B2B and B2C recruited sessions. In that analysis, the average no-show rate was about 8%. That average comes from a large dataset from User Interviews (14k studies and 9% no-show) and our own MeasuringU’s Operations Team data (24 studies and 5% no-show). The average no-show rate fluctuated between 5% and 10%.
A conservative estimate would therefore be to use 10% as the typical no-show rate (within the range we estimated in 2018).
Computing a no-show rate can help you better plan your research schedule. If you expect around a 10% no-show rate, you can over-recruit to compensate for the loss.
However, as the list above shows, no-shows are not the only source of unusable sessions. Over-recruiting by only 10% (a typical no-show rate) means you will almost certainly fall short because you haven't factored in additional roadblocks, such as planned cancellations, disqualified participants, or technical issues.
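A rough sketch of that shortfall, using assumed illustrative numbers (the 10% no-show rate from the earlier analysis plus an assumed 10% of sessions lost to other causes):

```python
import math

# Illustrative sketch with assumed rates: a buffer sized only for the
# ~10% no-show rate leaves you short if other losses (disqualified
# participants, prototype issues) claim, say, another 10% of sessions.
desired = 20
no_show_rate = 0.10      # typical no-show rate from the earlier analysis
other_loss_rate = 0.10   # assumed rate of other unusable sessions

recruited = math.ceil(desired * (1 + no_show_rate))  # buffer for no-shows only
expected_usable = recruited * (1 - no_show_rate - other_loss_rate)
print(recruited, round(expected_usable, 1))  # 22 recruited, ~17.6 usable
```

With those assumed rates, recruiting 22 yields only about 17 or 18 usable sessions, short of the 20 you need.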
While over-recruiting makes sense, you don’t want to over-over-recruit and waste your tight recruiting budget and scarce participants. The first step is computing a recruitment/over-recruitment rate.
The over-recruitment rate is the percentage of additional participants who must be recruited, beyond the desired sample size, to fill the desired number of usable slots.
For example, suppose you need 20 participants for a moderated study. You recruited and scheduled 20, but not all sessions were usable because:
- Two were no-shows (−2).
- One was unqualified and didn’t know how to use the product (−1).
- The prototype didn’t work, and one participant couldn’t reschedule (−1).
Of the 20 scheduled, you’d have data from 16 participants. Assuming you need all 20, you’d have to recruit at least four more participants to replace the ones you couldn’t use. If you recruited 24 to get 20 usable sessions, the recruitment rate would be 120% and the over-recruitment rate would be 20%.
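The arithmetic in this example can be captured in a short helper (a minimal sketch; the function name is our own, not a standard API):

```python
# Minimal sketch of the rate arithmetic from the example above
# (the function name is illustrative, not a standard API).
def recruitment_rates(recruited: int, desired: int) -> tuple[float, float]:
    """Return (recruitment rate, over-recruitment rate) as percentages."""
    recruitment = recruited / desired * 100
    over_recruitment = max(recruitment - 100, 0)  # can never fall below 0%
    return recruitment, over_recruitment

print(recruitment_rates(24, 20))  # (120.0, 20.0)
```

Recruiting exactly the desired number (20 of 20) gives a 100% recruitment rate and a 0% over-recruitment rate.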
A few additional things to keep in mind about the over-recruitment rate:
- If all 20 participant sessions were usable, the over-recruitment rate would be 0 with a 100% recruitment rate (20/20).
- If you don’t replace the participants lost to no-shows or unusable sessions, your total sample size shrinks, and the recruitment rate stays at a less meaningful 100% (16/16).
- The over-recruitment rate can never fall below 0%.
- If you have a no-show and re-recruit but the replacement participant doesn’t show (or cancels), then it counts as another recruitment. In some difficult-to-recruit studies, it’s not uncommon to have to replace replacement participants.
It’s quite common to need to reschedule (UX testing is a human activity and conflicts arise). It may be initiated by the participant (a conflict emerges, for example) or by the researcher (the product isn’t working correctly, or a moderator isn’t available). If a participant is successfully rescheduled, it doesn’t count toward the over-recruitment rate. If they aren’t able to reschedule and need to be replaced, it does count toward over-recruitment.
So, with an idea of how to compute an over-recruitment rate and how to handle rescheduling, we gathered data we could use to estimate over-recruitment.
MeasuringU Over-Recruitment Data from 2024
As we did for the no-show rate analysis, we looked first for published data. This time we didn’t find much, only some general recommendations to over-recruit by 10% based on the no-show rate or to simply recruit one or two extra participants.
So, we returned to our internal data to develop a more evidence-based recommendation. We conduct thousands of moderated sessions annually across dozens of studies in both our Denver labs and remotely (using the MUiQ® platform). We recruit a wide range of participants, from general consumers to highly technical or specialized users.
We pulled together cancellations, no-shows, and unusable rates for our most recent 30 projects in 2024. In those studies, 1,183 people participated in 20 B2C and 10 B2B studies. Of these 30 studies, 11 were in-person in our labs and 19 were remote. In only one of those studies (n = 16) were there no cancellations or unusable sessions, so we had to over-recruit in 29 of 30 studies (97%). In other words, you should definitely have a plan to over-recruit or settle for fewer participants.
Table 1 shows an average over-recruitment rate of 19% was needed to achieve the desired number of participants. It was remarkably similar for both in-person and remote sessions (shown) and for B2C and B2B study types (not shown, 19% for B2C, 18% for B2B).
| Location | # Studies | Over-recruitment Rate |
|---|---|---|
| In-Person | 11 | 18% |
| Remote | 19 | 19% |
| Total/Average | 30 | 19% |
Table 1: Over-recruitment rates for 30 MeasuringU studies with weighted average (24 of these were the studies we used to compute the no-show rate).
For simplicity, we’ll round up to 20% as a recommended default over-recruitment rate. Across those 30 studies, however, there was some variability (shown in Figure 1).
Figure 1: Dot plot of proportions of over-recruitment in the 30 studies.
As mentioned above, we had one study that required no additional recruitment (the leftmost dot in Figure 1). The worst study (rightmost dot) had a 51% over-recruitment rate (recruited 145 to fill 96 slots). The median of this distribution was 17% with an interquartile range (central 50% of scores) from 9% to 24%.
Applying a 20% recruitment buffer to these studies, we would have exactly hit the mark three times (10%), had too many participants 18 times (60%), and had too few participants nine times (30%). The tradeoff between under- and over-recruitment favors a strategy of slight over-recruitment because you can stop data collection and beat your deadline when you hit your desired sample size early, but you may miss deadlines when you under-recruit.
Keep in mind our recommended over-recruitment rate of 20% comes from our internal data only. We previously attributed our impressive no-show rate to the additional (stellar) layer of recruitment effort from the MeasuringU Operations Team. This involves another level of screening, reminding, and qualifying that goes into professional recruitment (and a dedicated Operations Team!). When we looked at the data published by User Interviews (the large self-service panel), their estimated no-show rate was almost double our rate. So be careful when generalizing these findings if you aren’t using a professional recruitment team or employing additional efforts to keep usable sessions high.
So, even when you’re making a substantial effort to reduce unusable sessions, you should initially plan to recruit 20% more participants than you need, then monitor and adjust as required.
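That planning rule reduces to a line of arithmetic; a minimal sketch (rounding up so you never budget a fractional participant):

```python
import math

# Sketch: apply the recommended 20% default buffer when planning recruitment.
def plan_recruitment(desired: int, over_recruitment_rate: float = 0.20) -> int:
    """Participants to recruit to end up with `desired` usable sessions."""
    return math.ceil(desired * (1 + over_recruitment_rate))

print(plan_recruitment(20))  # 24
print(plan_recruitment(12))  # 15
```

Adjust the default rate up or down as your own no-show and unusable-session data accumulate.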
To generate an estimate of how much you should plan to over-recruit, we:
Defined the over-recruitment rate. The over-recruitment rate is the percentage of extra participants you need to recruit beyond the desired sample size.
Found almost all studies require some over-recruitment. Across the 30 studies in our sample, only one had a 0% over-recruitment rate (all scheduled participant sessions were usable), so 97% of studies required some over-recruitment. It’s a safe bet to plan to over-recruit unless you’re OK with having fewer participants.
Recommended an over-recruitment rate of 20%. Our analysis of 1,183 participants across 30 studies from MeasuringU revealed an average over-recruitment rate of 19% (increased to 20% for a recommendation that’s easier to remember). It was about the same for in-person versus remote and B2B versus B2C. There is some variability in our data, with 50% of studies needing to over-recruit between 9% and 24% more participants.
Explained why no-shows aren’t the only impact on recruitment. In addition to no-shows, other common reasons for having to recruit more participants include disqualified participants or problems with a prototype.
Determined that no-shows only accounted for around half of the unusable sessions. Although we don’t have detailed records on why we needed to over-recruit, given the average no-show rate was around 10% from our previous analysis, that leaves other reasons (such as prototype problems and disqualified participants) to account for the other half.
Noted that high-quality operations teams and procedures help. Our data is based on having an excellent in-house Operations Team that has procedures in place to reduce no-shows, increase rescheduling efficiency, and work closely with research and product teams. Keep that in mind when using a 20% over-recruitment rate, but it’s probably not a bad place to start, increasing or decreasing the rate as needed for your situation.