Quantitative studies (such as surveys, quantitative usability testing, and analytics experiments) can offer valuable insights for product development. However, it is crucial to carefully design quantitative studies to ensure accuracy and reliability.
UX teams often rely on quantitative research to help them make important decisions. Basing those decisions on faulty data can lead to serious design mistakes and misallocated resources.
(Quantitative UX research involves lots of terminology and jargon, so consider referring to this quantitative-research glossary for quick definitions.)
Confounding Variable: Definition
Definition: A confounding variable is an unmeasured variable that may unintentionally affect the outcome of a research study.
Typically, UX researchers designate an independent variable and one or more dependent variables for their quantitative studies.
Independent variables are study conditions that are manipulated by researchers. For example, a team might run a quantitative study comparing two different versions of a design. In this case, the design change is the independent variable.
Dependent variables are outcomes that are measured from a quantitative study. Researchers typically expect the values of dependent variables to change based on their independent variables. For example, researchers might measure user satisfaction to see which version of a design leads to higher satisfaction ratings. Put another way, UX researchers typically expect changes to an independent variable to influence dependent variables.
A confounding variable can affect both the independent variable (the conditions you change) and the dependent variable (the outcomes you measure), causing unexpected results in the study.
For example, a confounding variable could:
- Reduce the expected influence of a design change
- Reverse the outcome of a task-success metric
- Eliminate or distort an effect of an independent variable
An Example of a Confounding Variable
A research team ran a within-subjects quantitative usability study to test a design change, meaning that each participant tested both designs. (In a between-subjects design, each participant would interact with only one design.)
Design A was tested in the morning with a group of participants, followed by a lunch break. The participants returned to test design B in the afternoon.
After analyzing the data, the team found that design B had a higher task-completion time than design A — in other words, participants completed tasks more slowly with design B in the afternoon than with design A in the morning.
Lower task-completion times are usually associated with better usability, but is that the case here? Does design A really have better usability than design B?
Several confounding variables were present in this study and could undermine the reliability of its results:
- Participants who tested design B had previous experience with the product, from the morning session.
- Lunch may have made participants less energized.
- Participants may be tired late in the day.
Some of these confounding variables may have conflicting effects: in the afternoon, participants might perform better due to experience but worse due to a post-lunch slump. Confounding variables can make it difficult to predict the result of a study.
To avoid the time-of-day and learning effects introduced in this study, participants should have been randomly assigned to test either design A or design B in the morning, with the other design in the afternoon. Randomizing the order in which participants encounter the study conditions reduces the influence of time of day, depleted energy, experience, and hunger.
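As a rough illustration, the order randomization described above can be sketched in a few lines of Python. This is a minimal, hypothetical helper (the function name and participant IDs are invented for the example); it splits participants evenly between the two possible condition orders, with a fixed seed so the assignment is reproducible.

```python
import random

def counterbalance(participants, conditions=("A", "B"), seed=42):
    """Randomly assign each participant a condition order so that
    half test design A first and half test design B first."""
    rng = random.Random(seed)  # fixed seed for a reproducible assignment
    shuffled = participants[:]
    rng.shuffle(shuffled)
    orders = {}
    for i, p in enumerate(shuffled):
        # Alternate orders down the shuffled list for an even split.
        if i % 2 == 0:
            orders[p] = list(conditions)
        else:
            orders[p] = list(reversed(conditions))
    return orders

orders = counterbalance(["p1", "p2", "p3", "p4"])
# Each participant is mapped to either ["A", "B"] or ["B", "A"],
# with two participants in each order group.
```

Because every participant still sees both designs, this preserves the within-subjects structure while spreading time-of-day effects evenly across the two conditions.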
More Examples of Confounding Variables
| Confounding Variable | Description | Solution |
| --- | --- | --- |
| Age effects | The age of participants can affect many UX-related variables, including satisfaction, time on task, task success rate, and reading ability. | Recruit a representative sample of participant ages for your study. Randomize your participants across the conditions of your study, so that ages are evenly distributed across conditions. Record the ages of your participants for later analysis, if needed. |
| Seasonal effects | Participants may behave differently depending on the current season; consumers' habits shift drastically around holidays (Chinese New Year, American winter holidays, etc.). A comparison between a Q4 usability study and a Q1 study may be affected by these seasonal shifts. | If you are conducting a study over a long period of time or comparing results from different studies, consider the time of year at which the studies were completed. Ideally, compare data collected over similar time periods. |
| COVID-19 pandemic | The coronavirus pandemic has changed how people interact with products; data collected before and after the pandemic may be affected by the pandemic. | Identify outlier data and exclude date ranges from the most acute period of the pandemic. |
| Competitive market shifts | If a competitor has a large sale or introduces an entirely new product right before a design change, user behavior may be affected by the competitor's actions, preventing you from assessing the effect of your design change. | Structure your studies to avoid major market disruptions. |
| Existing product experience | When comparing metrics like task-completion time for two different products, participants' prior experience with the products can skew the results in one direction or the other. | Recruit a representative sample of participant experience levels, or exclude participants with extensive experience with your products (if applicable). |
| Existing product opinions | Users may have preexisting viewpoints about the products being tested, influencing usability or survey outcomes. When comparing user satisfaction with two products, prior opinions about the two products could bias the results. | During recruitment, screen for extreme positive or negative product sentiment and either control for or exclude those participants. |
Why Are Confounding Variables Important?
A quantitative study can be a significant investment of time and money, so you must have confidence that the study is reliable and its results valid. Studies should be constructed so that they could be repeated with an expectation of the same result.
When a research study has low bias and a high level of repeatability and control, it has high internal validity — in other words, a study is internally valid if it does not bias a participant towards any specific answer or action.
If your research study has significant confounding variables, then the conclusions from that study may be wrong. Making decisions based on these misguided conclusions can result in significant loss of time and money for organizations.
Best Practices for Avoiding Confounding Variables
- Use within-subjects study designs when possible. Counterbalance or randomize the order in which participants are exposed to the different conditions in your study. For example, if participants are testing two designs, randomly decide which design each participant tests first. Within-subjects designs reduce sources of error and allow experimental conditions to be counterbalanced.
- Randomly assign condition groups for between-subjects study designs. For example, randomly decide which design each participant will see.
- Carefully consider possible confounding variables of an upcoming study; structure the study to avoid them or measure them to control for them later. For example, if you know that age may be a factor for task completion, carefully collect all your participants' ages and control for the effect of that variable when performing statistical analyses such as regression in your study.
- Keep your testing environments, personnel, and protocols consistent throughout a study. Changing a study’s testing conditions between experiments or conditions (such as testing a different design in a different room) can unintentionally influence the outcomes of a study.
- If it proves too difficult to estimate confounding variables or effectively control the potential variables in a quantitative study, consider conducting a qualitative study instead. In UX research, quantitative studies are not typically meant for exploratory research.
- Focus, clarify, and justify your research hypotheses before conducting your study.
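As a concrete illustration of measuring a confounder so you can control for it later, the sketch below compares designs within each stratum of a measured confounder (here, hypothetical age bands). Stratification is one simple alternative to the regression approach mentioned above; the function name and the data are invented for this example.

```python
from collections import defaultdict
from statistics import mean

def stratified_means(records, strata_key, group_key, value_key):
    """Compare group means within each stratum of a measured confounder,
    so the group comparison is not distorted by that confounder."""
    buckets = defaultdict(list)
    for r in records:
        buckets[(r[strata_key], r[group_key])].append(r[value_key])
    return {k: mean(v) for k, v in buckets.items()}

# Hypothetical task times (seconds) by design, with age band as a
# recorded confounder. Older participants are slower on both designs.
data = [
    {"age_band": "18-34", "design": "A", "time": 40},
    {"age_band": "18-34", "design": "A", "time": 44},
    {"age_band": "18-34", "design": "B", "time": 36},
    {"age_band": "18-34", "design": "B", "time": 38},
    {"age_band": "55+", "design": "A", "time": 60},
    {"age_band": "55+", "design": "A", "time": 64},
    {"age_band": "55+", "design": "B", "time": 55},
    {"age_band": "55+", "design": "B", "time": 57},
]
by_stratum = stratified_means(data, "age_band", "design", "time")
# Within each age band, design B's mean time is lower than design A's,
# a comparison that pooled means could obscure if ages were unevenly
# distributed across conditions.
```

If the group difference holds within every stratum, as it does here, you can be more confident that the effect belongs to the design change rather than to the confounder.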