Summary: In pilot studies, you can occasionally relax the need for real users and let members of your own team serve as test participants. It's good for them.
If you work in any form of user experience role — or even manage a company with a user interface, such as a website or intranet — you should be a test participant in a usability study now and then.
There are four reasons to do this:
- You gain a better appreciation of your average customers when you have to struggle with a new and unfamiliar user interface yourself.
- You gain empathy with the test users in your regular usability studies. Discovering how stupid and embarrassed you feel when you fail a task helps you understand the importance of making test users comfortable in test situations.
- You sharpen your skills as a test facilitator by observing the test from a different angle.
- Finally, you offer a "warm body" to fill a slot for pilot testing, freeing up the real users for the actual study and thus saving on your recruiting budget.
Pilot Testing = Relaxed Recruiting Criteria
I've harped endlessly in previous columns about the need to recruit people who represent your target audience to serve as test participants in user testing. This requirement remains essential for valid test results. Testing the wrong people means that you get the wrong findings.
I've also repeatedly said that designers are not users, and even that vice presidents are not users. In fact, anyone who works at your organization knows too much and isn't qualified to serve as a representative test user, even if he or she meets the profile of your target audience. (Exception: for intranet studies, you obviously do want to test your own employees. Stay away from members of the intranet team or the IT department, though.)
Given these rules, how can I recommend that you serve as a test participant?
Using insiders as test users works in pilot testing, and pilot testing only.
The distinction between a pilot test and a regular test is that pilot testing is concerned only with refining the test methodology. We're not looking for actual usability findings to improve the user interface. We want only to improve the test itself, so that once we bring in real users, we can squeeze out of them as many insights as possible in the limited time we have for the study. (For more on how to run studies, see our full-day Usability Testing course.)
Still, it's best to use representative users as pilot test participants, because it lets you more realistically assess the test plan. You might, for example, want to know whether you have the right number of test tasks for the available time, and internal users typically complete tasks much faster than external users.
So, if it's easy for you to recruit lots of members of the target audience, go ahead and "burn" some of them as pilot users. If you do this, you can use some of the pilot session's qualitative findings as real usability findings. However, you can't combine quantitative data between pilot sessions and normal test sessions, because you'll have refined the test script between the two tests. Which is, of course, the point.
Assume that you'll change the test plan after pilot testing. In fact, if you're a less-experienced usability specialist or if you're running a particularly high-risk or high-profile study, you should run many rounds of pilot testing and improve the test plan after each round. Just because you're testing a study design instead of a UI design doesn't change the basic value of iterative design for driving increased quality.
Ethics of User Testing
To my knowledge, our training course on user testing is one of the very few to cover the ethics of working with human subjects.
One of the key ethical requirements is to protect participants from mental anguish. We don't want people to leave our study feeling depressed or worthless because they repeatedly failed at using an "obvious" computer system. It's very easy for users to blame themselves for the many errors and miscomprehensions they encounter in a typical usability study.
Of course, we always say at the outset that "we're testing the design, not you." But people tend to forget this point when wrestling with a difficult user interface.
Thus, it's important that test facilitators take active steps to make participants feel comfortable. We want them to leave happy and feel energized about having helped improve an important design that they might actually use one day. Not only is this an ethical imperative, but it also helps us pragmatically: satisfied participants refer friends and colleagues who might be good candidates for future studies.
It's easy to emphasize the emotional aspect of user testing in our courses, but there's still nothing as powerful as personal humiliation to drive home the need to be gentle with users.
Clearly, there are benefits to periodically switching roles and being the user yourself. Every two years or so is a good frequency for such an experience. The rest of the time, even for pilot testing, working with real users is best.