Seeing what I’ve seen through your customers’ eyes

With apologies to Blade Runner purists everywhere, sharing the benefits of eyetracking with as many people as possible seemed worth risking a bad pun.

While no magic bullet, bringing the benefits of objective data to bear on the web design and development process has helped us understand more about the needs of our clients’ customers, and produce better work.

When I try to explain why eyetracking is different from typical usability testing, I often struggle to balance my enthusiasm for the technique with an honest accounting of its limitations. Make no mistake: eyetracking cannot peer into the souls of your customers and read their minds. (Though research indicates it may be able to tell if you are lying, that’s rarely what we’re probing for when we subject our work to evaluation.) But an experienced researcher can use eye gaze data to uncover things that users won’t – or, more often, can’t – tell you on their own.

If you want useful answers, you must ask the right questions

Sometimes, usability tests are constructed to answer questions which market research might be better equipped to answer. Consider, for example, a seemingly simple question such as, “which design do you prefer?”

It’s a loaded question. Preference can be determined along a number of axes: is the design pleasing? Is it helpful? Clear? Interesting? Besides being vague, the question is usually self-serving. What the inquisitor is really asking is, “which of my designs do you like better?”

When comparing multiple design options for a website, look at objective measures: which design makes it easier to find pricing for a product? Which design results in a better understanding of the business’ core purpose? What design elements are associated most strongly with an increase in an audience’s ability to complete a task successfully?

In the context of questions such as these, eyetracking data can provide some useful insight: Among the evaluation participants who finished a task quickly, which ones saw a specific design element? Was careful inspection of a design element associated with increased or decreased clarity in understanding the site’s purpose? Did the color of the site’s main navigation, or primary call to action, noticeably affect how quickly people saw it, or whether they understood its purpose?
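To make one of those questions concrete, here is a minimal sketch of how “how quickly did people see the primary call to action?” might be answered from raw gaze data. The fixation records and the call-to-action region below are entirely hypothetical; real eyetracking software exports richer logs, but the underlying calculation is the same.

```python
# Each fixation: (timestamp in ms from task start, x, y) in screen pixels.
# These values are invented for illustration.
fixations = [
    (120, 400, 300),
    (350, 620, 150),
    (610, 850, 90),   # lands inside the call-to-action region
    (900, 860, 95),
]

# Hypothetical area of interest (AOI) for the call to action:
# (left, top, right, bottom) in screen pixels.
cta_aoi = (800, 50, 1000, 120)

def time_to_first_fixation(fixations, aoi):
    """Return ms until the first fixation inside the AOI, or None if never seen."""
    left, top, right, bottom = aoi
    for t, x, y in fixations:
        if left <= x <= right and top <= y <= bottom:
            return t
    return None

print(time_to_first_fixation(fixations, cta_aoi))  # 610
```

Comparing this number across design variants (or across navigation colors, as in the question above) turns “which design do you prefer?” into something measurable.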

True insight comes from self-examination

One of the most fascinating experiences I’ve had conducting eyetracking usability sessions is the Retrospective Talk Aloud. It is based on a common usability testing technique in which participants are encouraged to self-narrate their experiences (“I’m clicking on the ‘buy it now’ button… oh, it put it in my shopping cart.”). The goal is to elicit clues as to what the person expected to happen, and whether or not actual events mapped to their expectations.

Two particular elements of human behavior make this technique frustratingly difficult to use: humans are terrible multi-taskers, and we are awfully biased in our observations.

(You may argue with both points, but research seems to bear out that multitaskers perform multiple tasks poorly, and we tend to describe our behavior in the best possible light.)

The Retrospective Talk Aloud helps to attenuate both tendencies, at least to a degree. In this technique, participants are shown a recording of their eyetracking session, and can follow along as they observe how their own eyes were recorded moving around an interface. During the initial recording, participants are free to “sink in” to the task at hand, and are not distracted with the responsibility of verbally reporting each action they take.

In reviewing their recording after the fact, the participant is freed from the cognitive load of having to remember what they looked at. In addition, as we ask them to report on why they took certain actions, their answers can be based on an objective observation of what they actually did. The end result is a more thoughtful, and accurate, recollection of the participant’s experience.

In general, we are all individuals

Variations between test participants tend to get “smoothed out” when eye gaze data is aggregated, which is one of the benefits of eyetracking: it allows you to see not through one customer’s eyes, but through the eyes of many at once. Typical usability testing relies on the assumption that what two or three users have difficulty with will likely impact many other people as well. As eyetracking research allows you to observe data across a larger audience, you gain the benefit of objective performance data to use as a balance against more subjective, aesthetic design judgments.
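The “smoothing out” idea can be sketched roughly as binning many participants’ fixations into a coarse grid, so that individual quirks wash out and shared attention patterns emerge. The data and grid size here are made up for illustration; real tools produce far finer-grained heatmaps than this.

```python
from collections import Counter

# Fixations from several participants, as (participant_id, x, y) on a
# 1000x800 screen. Invented values for illustration only.
fixations = [
    ("p1", 510, 410), ("p1", 505, 420), ("p1", 120, 90),
    ("p2", 502, 405), ("p2", 515, 415), ("p2", 130, 95),
    ("p3", 508, 412), ("p3", 880, 700),
]

CELL = 100  # bin fixations into 100px grid cells

heatmap = Counter()
for _, x, y in fixations:
    heatmap[(x // CELL, y // CELL)] += 1

# The cell around (500, 400) accumulates hits from all three participants,
# while one-off glances (p3's look at the corner) stay faint.
print(heatmap.most_common(2))
```

The hot cell stands out precisely because it is shared across people, which is the aggregate view this paragraph describes.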

Overall, the quality of your end product is enhanced by this combination.
