“We are not our users” is a phrase that UX practitioners and design teams are shouting from the rooftops. And for good reason: understanding that users are complex and different is a critical piece of creating products that satisfy and delight. Since we are not our users, it’s imperative that we consult them early and often, and integrate their data into product strategy, design, and development. Doing so increases the likelihood that the products we build will satisfy real user needs, resulting in products that are used, enjoyed, and, hopefully, treasured.
There are countless UX research methods available to collect data on users’ perceptions of and reactions to the products that we create. Researchers can conduct interviews and focus groups to understand user needs, tasks, and motivations. We can generate surveys that diagram users’ mental models and subject matter knowledge. We can run in-person and/or remote user testing studies with products that are developed. But what method(s) can researchers use to validate concepts or ideas that don’t exist yet? Often it is unclear whether a hypothetical product will appeal to potential users, and validation testing can provide critical insight into future product direction. Unfortunately, most common research methods limit the amount of testing that can be conducted with a product that isn’t fully prototyped or developed. Researchers can conduct interviews and ask questions like “how useful would ‘x’ be to you?” but this data doesn’t provide insight into whether and how users will actually interact with a hypothetical product.
This is where the Wizard of Oz (WOz) research method comes in. John F. Kelley introduced WOz testing to the Human-Computer Interaction discipline in 1984; WOz testing involves having a human (or team of humans) act as “the man behind the curtain” (like in the Wizard of Oz movie), simulating the functionality of a fully built product. Users who participate in a WOz test are unaware that a “wizard” is behind a metaphorical curtain pulling the strings that make the product come to life. This type of test allows researchers to collect data on a product while bypassing development time and costs. Accordingly, WOz testing is particularly beneficial when a fully built product doesn’t exist but human-product interaction is warranted, as in early concept validation and/or usability testing of features.
An additional benefit of WOz testing is that it allows researchers to collect behavioral data on how potential users will actually interact with a product, rather than simply collecting data on how users might intend to or imagine interacting with a hypothetical product. As a result, researchers who use a WOz method can be more confident that the data they collect will generalize to other situations and other potential users (external validity).
WOz testing can answer research questions like: Will people actually use this feature/product? If so, how will they use it? Will people see this feature? Is this feature/product usable?
How researchers implement the WOz method will depend on the type of product being tested, the goal and structure of the test, and the fidelity of the prototype, which will dictate the level of involvement required of the “wizard.” The implementation includes producing a prototype of the hypothetical product that has the core functions necessary for testing, training a “wizard” to simulate specific actions, identifying relevant behavioral data to collect, and building in ways to collect that data.
A Case Study
How does this work in practice? Let’s look at a real-life example from a recent client project here at WillowTree!
In August 2017, a client presented our research team with a concept validation question: will this mobile app concept effectively increase engagement with a physical wellness product? Given the mobile app concept was just that—a concept—this created a new and unique testing challenge. How do you test the efficacy of a product that isn’t built yet? The answer: You Wizard of Oz it!
The product designer on our team generated a series of concept prototypes using InVision. These prototypes had the general look and feel of the hypothetical app in question, and contained the primary proposed feature set. Accordingly, users could complete a set of core actions—the set of actions we were interested in evaluating—but additional interaction was limited. Participants who received the mobile app concept accessed the InVision prototype via a link on their personal device and were instructed to use it as they normally would if they had just downloaded the app from the app store. Each participant had a unique InVision prototype that was updated to reflect their customized usage. Our product designer played the role of the wizard, updating each individual prototype every evening to simulate a fully built app that could record and track users’ progress. This execution is a slight modification of the way WOz is traditionally implemented: typically the “wizard” performs every action manually, but given our testing parameters (number of participants, complexity of interactions, and duration of the test), we offloaded actions to the prototype where possible.
Across a two-week period, we collected data on our participants’ usage of the physical product and their interactions with the prototype. The primary metrics we were interested in capturing involved app usage and usage of the physical product. Every evening we sent the participants a link to a survey with one question: did you use the physical product today? We used this data to update each individual’s prototype to reflect their usage each day, as described above.
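To make the nightly loop concrete, here is a minimal sketch of how survey responses could feed the next morning’s prototype update. This is an illustrative reconstruction, not WillowTree’s actual tooling: the participant IDs, data structures, and milestone thresholds are all hypothetical.

```python
from datetime import date

# Hypothetical in-memory state: one record per participant, tracking
# the days on which they reported using the physical product.
participants = {
    "p01": {"usage_days": []},
    "p02": {"usage_days": []},
}

def record_survey_response(pid: str, used_today: bool, today: date) -> None:
    """Log a participant's answer to the nightly one-question survey."""
    if used_today:
        participants[pid]["usage_days"].append(today)

def build_prototype_update(pid: str) -> dict:
    """Summarize what the 'wizard' should reflect in tomorrow's prototype:
    cumulative usage and whether a milestone animation should unlock."""
    total = len(participants[pid]["usage_days"])
    return {
        "participant": pid,
        "total_uses": total,
        "milestone_reached": total in (5, 10, 15),  # hypothetical milestones
    }

# Nightly run: record today's responses, then emit an update sheet per person.
record_survey_response("p01", used_today=True, today=date(2017, 8, 14))
update = build_prototype_update("p01")
print(update)
```

In a real study the “update sheet” would simply tell the wizard which screens to swap into each participant’s InVision prototype before the next morning.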
We also put this data to work inside the prototypes themselves: we visualized each participant’s usage, pushed them customized content and updates, and even provided rewarding animations when usage milestones were achieved, just as a fully built app might.
The outcome was that the group of participants who received the “mobile app” were significantly more likely to use the physical product than a group who didn’t receive the “app.” Furthermore, participants who received the “app” were more excited and informed about using the physical product.
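A claim like “significantly more likely” typically rests on a statistical comparison between the two groups. As a hedged illustration only, here is how such a comparison might look as a two-proportion z-test in plain Python; the counts below are invented for the example and are not the study’s actual data.

```python
import math

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int) -> float:
    """Z statistic comparing usage rates between the group that received
    the 'app' (a) and the group that did not (b), using a pooled
    estimate of the proportion under the null hypothesis."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: participant-days on which the product was used,
# out of total participant-days observed per group.
z = two_proportion_z(success_a=90, n_a=140, success_b=60, n_b=140)
print(z)  # |z| > 1.96 would indicate p < .05, two-tailed
```

In practice a researcher would likely reach for a library routine (e.g., a proportions test in statsmodels) rather than hand-roll the statistic, but the underlying comparison is the same.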
All of these insights were obtained through a creative research design and the implementation of a Wizard of Oz method. In this case, WOz allowed us to capture information about how actually using a hypothetical product would influence behavior, while saving the time and cost of building an app specifically for testing. The method gave our client valuable insight into how a hypothetical concept could shape the future of their business and their users’ behavior, with minimal development effort.