Flexible Usability Testing: 10 Tips to Make your Sessions Adapt to Your Clients’ Needs

by Jennifer Cardello on August 31, 2013

Summary: For testing assignments where client teams are ready, willing, and able to take immediate action, being flexible with tasks within and between participant sessions can offer a better bang for your buck.


Many times, when a client wants a usability evaluation of an environment or application that is already in service, they are looking for a traditional summative test: a smallish set of rigid tasks geared toward measuring key performance indicators (success rate, task completion time, and errors), with the goals of 1) discovering issues and 2) providing evidence to support investment in fixing those issues.

However, as more organizations embrace agile and lean methods, decision-making has become decentralized and traditional metric-based evidence is not as crucial for some. Organizations still want to hire usability experts to plan and conduct usability testing (until it becomes an internal competence), but they may not be as interested in generating “evidence” to secure budget for changes – instead, they want actionable findings and recommendations covering the broadest scope of their environment/application. To meet this need, we’ve found there is tremendous value in being more flexible by:

  • Iterating tasks between and within sessions
  • Improvising and customizing tasks to better suit individual participants
  • Inviting real-time client participation

(Note that by "clients" we simply mean the people who have to take action on the usability finding. Your clients can be a traditional external consulting client if you're a consultant or work in an agency, or they can be internal stakeholders if you work inside the organization.)

For flexible testing you still create a test guide, but the tasks within it are not treated as unchanging components; instead, they serve as a starting point.

There are two benefits to this:

  • Decreased pressure on the moderator/planner to create the perfect tasks right out of the gate: Tasks that we agree will be tested and changed if necessary are a lot easier to create.
  • Decreased pressure on the client: Often, members of the client team have never observed a real usability test, so asking them to come up with or approve a definitive list of tasks and expecting it to be thorough is unrealistic; they simply don’t know what they don’t know. Once they start observing testing, the task ideas start flowing.

Sample Test Guide Contents:

  • Study logistics (times, locations, number of participants, session duration, incentives, moderator)
  • Target participant criteria
  • Study goals
  • Study questions
  • Tasks
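
Because the tasks are meant to evolve between sessions, it helps to keep the guide in a form that is quick to edit and re-share with observers. The sketch below is one purely hypothetical way to capture a test guide as a small Python structure; every field name and sample value is illustrative, not part of any prescribed template.

    # A minimal, editable sketch of a flexible test guide.
    # All field names and sample values are illustrative.
    test_guide = {
        "logistics": {
            "location": "Lab A / remote",
            "participants": 6,
            "session_length_minutes": 60,
            "incentive": "$75 gift card",
            "moderator": "J. Smith",
        },
        "participant_criteria": ["Has shopped online in the past 3 months"],
        "study_goals": ["Identify and prioritize issues in product findability and checkout"],
        "study_questions": [
            "Does site search yield relevant results?",
            "Are there any issues filling out checkout forms?",
        ],
        # Tasks are a starting point; expect to revise them between sessions.
        "tasks": [
            "Find a pair of running shoes under $100 and add them to your cart.",
            "Check out as a guest using the test payment details provided.",
        ],
    }

    print(f"{len(test_guide['tasks'])} starting tasks, "
          f"{len(test_guide['study_questions'])} study questions")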

Top 10 Tips for Flexible Testing

To get the most out of flexible testing, here is some advice focused on the most critical aspects of planning, facilitation, and management.

1. Define testing goals and intentions first

You need to determine two things:

  • Why the client is sponsoring the study.
  • What they intend to do with the results.

It is imperative that the client answers both of these questions honestly and engages in a conversation about how these goals and intentions will inform the choice of research method and its output. Of all your discussions, this is the one with the greatest impact on your ability to meet expectations.

Matching Client Goals/Intentions with Testing Methods

Goal/Intention                                               | Summative Testing | Flexible Testing
Generate statistical evidence                                | x                 |
Convince executives to make changes/allocate budget          | x                 | x
Inform design strategy                                       |                   | x
Guide design specifications                                  |                   | x
Inform design changes                                        |                   | x
Identify/prioritize issues with existing design and content  | x                 | x
Identify good characteristics of existing site               | x                 | x
Acquire a “safe feeling” of signing off on an exact, unchanging test plan before the research starts | x |

2. Generate very specific questions the study must answer

The easiest way to come up with tasks to test is to start with the questions the study should answer. I used to focus on asking clients to tell me their “mission-critical tasks,” but that can limit your ability to come up with tasks that actually touch and test the attributes that clients believe are problematic.

After goals and intentions are defined, ask the client to give you a list of questions they want answered in this study. You can use some of these sample questions to get the ideas flowing:

Potential Questions to Be Answered in Usability Testing (Website/Application)

Navigation
  • Do people recognize the global navigation as navigation?
  • Do people use local navigation in [location]?
  • Do people use the navigation system to understand where they are?

Product/Content Findability
  • Does site search yield good, relevant results?
  • Do overview pages effectively route users down a path?
  • Do people use filters? Under what circumstances?

Product Information
  • What information do people fixate on?
  • What information do they have trouble finding?
  • What information do they want to see?
  • What information is confusing?
  • Do users use related links?
  • Do users seek reviews?

Checkout
  • Can people easily find a way to check out?
  • Are they nervous about any required/requested information?
  • Are there any issues filling out forms?
3. Design tasks to answer questions

The best way to ensure clients get what they need is to create testing tasks that expose the qualities of the site they are interested in evaluating. This doesn’t mean that you have to create a task for each question. It’s likely that typical user task scenarios will address several questions each and may overlap as well – and that’s OK.
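
One lightweight way to check that coverage is to map each task to the study questions it is expected to touch and flag any questions no task addresses. The snippet below is a hypothetical sketch of that cross-check; the task and question wording is illustrative only.

    # Map each task to the study questions it should help answer.
    # Wording is illustrative, not a prescribed set.
    task_coverage = {
        "Find a pair of running shoes under $100": [
            "Does site search yield relevant results?",
            "Do people use filters? Under what circumstances?",
        ],
        "Check out as a guest": [
            "Can people easily find a way to check out?",
            "Are there any issues filling out forms?",
        ],
    }

    study_questions = {
        "Does site search yield relevant results?",
        "Do people use filters? Under what circumstances?",
        "Can people easily find a way to check out?",
        "Are there any issues filling out forms?",
        "Do users seek reviews?",
    }

    # Any question not touched by at least one task needs a new task
    # (or a conscious decision to drop it).
    covered = {q for questions in task_coverage.values() for q in questions}
    for question in sorted(study_questions - covered):
        print(f"Not yet covered by any task: {question}")

Several questions can share one task, so a short task list can still cover a long question list.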

4. Get as many client observers as possible

Testing is always better when observed by as many stakeholders, creators, and controllers as possible, but this is even more true when the tasks are intended to change. More brains lead to more ideas, which lead to more momentum.

5. Incorporate competitive offerings into the test plan

If possible, include some tasks performed on competitive sites so there is context. Competitive comparisons also tend to be among the most controversial topics in any organization, along the lines of “So-and-so does it this way and it’s better.” The first benefit of observing competitors in any usability study is that it can prove or disprove those statements. The second benefit, specific to flexible testing, is that watching people use environments beyond your own can generate all sorts of additional task ideas.

6. Use pre-task questions to inform task improvisation

Define a series of questions to ask at the beginning of each session to better understand your participants; their answers feed directly into task improvisation (see the sketch after the list below). Questions can focus on:

  • Needs: What are they seeking?
  • Experience: How do they use particular information/content/products?
  • Intentions: Why do they use particular information/content/products?
  • Knowledge: What do they know about this particular topic?
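
As a concrete illustration of how those answers feed improvisation, here is a small hypothetical sketch: the participant’s pre-task answers are recorded, then used to customize a generic task template. The field names, answers, and template wording are all invented for the example.

    # Hypothetical pre-task answers recorded for one participant.
    participant = {
        "needs": "Looking for trail-running shoes",
        "experience": "Shops online monthly; rarely uses site search",
        "intentions": "Comparing prices across a few retailers",
        "knowledge": "Knows the brand but has never used this site",
    }

    # A generic task template, customized with what we just learned.
    task_template = "Find {item} on the site and decide whether you would buy it here."
    improvised_task = task_template.format(item="a pair of trail-running shoes under $120")

    print(f"Pre-task notes: {participant['needs']}")
    print(f"Improvised task: {improvised_task}")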

7. Physically separate the client team from moderator/participant

You want your team to be reacting and discussing together so they can suggest task changes and formulate questions for the moderator between sessions. If they are in the same room as the participant, team interaction is limited to facial expressions and maybe passing notes (which, by the way, is totally noticeable to the participant and can be unnerving). It’s best to keep your client team in a room separate from the test session. This applies to traditional “lab” sessions as well as remote-user sessions where, ideally, the facilitator should be alone in the room.

8. Define ground rules for real-time participation so clients know how to help instead of hurt

Inform your client team of what is happening and how they can optimize the process for their benefit:

  • Tell them this testing is flexible – it’s intended to change.
  • Tell them why: Because as you watch the testing, you may get additional task ideas or become curious about other features or content that may be suitable to a particular participant (you know your site best!).
  • Do not allow task suggestions/changes in the first session – ask the team to watch it without real-time feedback.
  • Reserve a generous amount of time (at least one hour) to debrief after the first session: Collect feedback and make test plan changes.
  • Have the team elect one representative who will be responsible for sending you real-time task suggestions starting with the second session.
  • Tell them what is useful and what is not:
    • Do not send requests to ask for user opinion (“Do you like this logo?”).
    • Do send ideas about products you have that this user may find useful (“This user said she likes to golf and we sell a line of golf shoes; maybe we can see if she realizes that and then see if she’s able to find a pair that meets her needs?”).
    • Do send ideas about features/content that might suit a user based on their pre-test answers or behavior (“This user seems really fixated on peer reviews; any way to see how they react to our Q&A content?”).

9. Make sure users are minimally disturbed by the conversation between moderator and client

If you are using conference-calling technology to broadcast and/or record your sessions (e.g., GoToMeeting, WebEx), do not also use that channel for the client team to talk with you. Cell-phone texting (with the phone on mute) is less disruptive: the user does not lose their train of thought or see information intended only for the research and/or client team.

10. Debrief and prep between each session

Between each session, reserve enough time to chat with the client team, get feedback on the facilitation and the information gathered, and make changes to the tasks.

  • Review each task and ask for edits.
  • Ask if they want to retire any tasks: This can happen when a team has seen enough people failing on the same task. If they’ve seen it two or three times, they’d rather use that time to expand the scope.
  • Quickly outline the tasks for the next participant (see the sketch after this list).
  • Make your task sheets (if facilitating in person): If you have access to a printer, type these up, but it’s fine to use pen and paper when pressed for time.
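
If it helps to keep a record of how the task list evolved, something as simple as the hypothetical sketch below works: keep a per-session copy of the tasks, drop the retired ones, append the new ones, and print a fresh task sheet for the next participant. The function name and task wording are invented for illustration.

    # Per-session task lists; session 1 starts from the test guide (illustrative wording).
    sessions = {1: [
        "Find a pair of running shoes under $100.",
        "Check out as a guest.",
    ]}

    def next_session_tasks(previous_tasks, retire=(), add=()):
        """Copy the previous task list, drop retired tasks, append new ones."""
        tasks = [task for task in previous_tasks if task not in retire]
        tasks.extend(add)
        return tasks

    # After the session-1 debrief: the team has seen enough checkout failures,
    # so that task is retired and a review-related task is added instead.
    sessions[2] = next_session_tasks(
        sessions[1],
        retire={"Check out as a guest."},
        add=["Find out what other customers think of these shoes."],
    )

    # Quick task sheet for the next participant.
    print("Session 2 task sheet")
    for number, task in enumerate(sessions[2], start=1):
        print(f"  Task {number}: {task}")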

Getting the Most from Flexible Testing

Using a flexible testing method can help you observe a greater range of site features, because tasks are allowed to change as client questions are answered and can be adjusted or created to better suit individual participants. This method works well for organizations that are less concerned with generating metrics from usability testing and more interested in immediate action items.

Ideally, use experienced usability facilitators who can accommodate this constantly changing game plan and welcome the challenge. Experience helps, because there's no time to pilot-test the changes, so you need to get them right the first time. However, inexperienced staff can use a modified version of flexible usability testing where they treat the entire study as an extended exercise in iterative pilot testing and gradually make their tasks and test methodology more and more appropriate. Not as good, of course, but then the only way anybody gets to be an experienced usability expert is to start off with no experience and gradually get better. Flexible testing allows you to get better faster.

