
Bad habits slip into eye-tracking tests

By Tessa Reed, Journalist
Johannesburg, 03 Apr 2012

The practice of tracking users' eye movements for market and user experience research is more common than ever thanks to more affordable and easier-to-use equipment.

This is according to Aga Bojko, associate director at User Centric, a global user experience research firm.

According to User Centric, eye tracking can be used on a variety of products, including commercial and informational Web sites, advertising, and product packaging, to determine what attracts attention and what gets ignored. The company says eye tracking studies are conducted using infrared technology that registers where people are looking, as well as the size of their pupils.

Speaking last week at the UX Masterclass, a biannual user experience event hosted by the UXalliance, Bojko warned, however, that the increased use of eye tracking has opened the door to poor research procedures. Bojko explained that while there has been a rapid increase in the number of people practising eye tracking, the knowledge and understanding of how to use the method properly have not spread as quickly.

Bojko discussed the limitations associated with computer-simulated heatmaps and Web cam eye tracking, and refuted the “rule of 30 participants.”

Virtual vs. real

Bojko explains that while companies offering computer-generated heatmaps, or "participant-free eye tracking", claim a 75% to 90% correlation with real eye tracking data, her research does not support this claim.

According to Bojko, these services let users upload an image, such as a screenshot of a Web page. The user is then provided with an image, usually a heatmap, showing a computer prediction of where people would look within the first five seconds of seeing the image.

However, Bojko says her own studies showed marked differences between computer-generated heatmaps and heatmaps from actual eye tracking tests. For example, she says, the eye tracking simulation of the eBay homepage predicted a lot of attention on images (including advertising). Bojko says her study, using actual participants, showed that participants barely even looked at these images, and focused on the navigation and search.
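One common way to quantify how closely a predicted attention map agrees with a heatmap built from real fixations is a pixel-wise Pearson correlation. The minimal sketch below uses hypothetical arrays rather than any vendor's method or Bojko's data, purely to show how such a correlation figure might be computed:

```python
import numpy as np
from scipy.stats import pearsonr

def heatmap_correlation(predicted, observed):
    """Pearson correlation between two same-shaped 2-D attention heatmaps.

    `predicted` stands in for a computer-generated saliency map and
    `observed` for a heatmap aggregated from real eye-tracking fixations;
    both are illustrative placeholders.
    """
    r, _ = pearsonr(predicted.ravel(), observed.ravel())
    return r

# Toy example with random maps, only to demonstrate the call:
rng = np.random.default_rng(0)
predicted = rng.random((60, 80))
observed = rng.random((60, 80))
print(f"correlation: {heatmap_correlation(predicted, observed):.2f}")
```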

No magic number

Bojko also argues that there is no “magic number” when selecting participants for an eye tracking study, despite some researchers claiming that the ideal sample size is 30. Bojko explains that the number of participants needed for an eye tracking study is dependent on a number of factors, including the type of research, study design, measures used, and desired confidence level.

For example, she says, when comparing two products, researchers need about four times more participants if each product is tested by a separate group of participants (a between-subjects design) than if every participant interacts with both products (a within-subjects design).
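A roughly fourfold difference is consistent with standard power-analysis arithmetic. The minimal sketch below uses illustrative values (a medium effect size, 80% power, 5% significance, and a 0.5 correlation between a participant's scores on the two products), not Bojko's own figures, to show where such a factor can come from:

```python
from statsmodels.stats.power import TTestIndPower, TTestPower

# Illustrative assumptions, not values from the study:
effect_size = 0.5      # Cohen's d for the difference between products
alpha, power = 0.05, 0.80
rho = 0.5              # correlation between a participant's two scores

# Between-subjects: each product tested by a separate group (two-sample t-test).
n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=alpha, power=power)
total_between = 2 * n_per_group

# Within-subjects: every participant tries both products (paired t-test).
# The effect size on difference scores grows as the correlation rises.
effect_size_paired = effect_size / (2 * (1 - rho)) ** 0.5
total_within = TTestPower().solve_power(
    effect_size=effect_size_paired, alpha=alpha, power=power)

print(f"Between-subjects total: ~{total_between:.0f} participants")
print(f"Within-subjects total:  ~{total_within:.0f} participants")
# With these assumptions the between-subjects study needs roughly four
# times as many participants as the within-subjects one.
```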

According to Bojko, sample size is also often constrained by the available resources: researchers have to come as close to the desired number of participants as the budget allows.

Unreliable Web cam results

Bojko also believes that while Web cam eye tracking used for remote user testing might, in the future, be a cost-effective alternative to lab eye tracking, its current accuracy is very low compared to infrared light-based eye trackers. According to Bojko, this is partially due to the limitations of the technology itself and partially because the researcher is unable to make sure the participant does not move or get distracted by things in the environment.
