A new procedure called Subtle Gaze Direction, which can dramatically alter which parts of an image people look at on a computer screen, has been developed by Ann McNamara, an assistant professor in the College of Architecture’s Department of Visualization at Texas A&M University, and a team of research associates.
The research has possible applications in areas as diverse as computer gaming, online education, training simulations and advertising.
“When people are viewing images, there are certain things that catch your gaze,” McNamara said. “Human faces, things that appear nearer rather than farther, or areas with high contrast.”
To document where people’s eyes are drawn in an image, McNamara and her associates use small infrared cameras calibrated to record the movements of a subject’s pupil; the cameras track the subject’s eye movements while he or she looks at a series of images on the screen.
As an assistant professor of computer science at Saint Louis University in 2006, McNamara began the project with fellow researchers Cindy Grimm, associate professor of computer science at neighboring Washington University in St. Louis, and Reynold Bailey and Nisha Sudersanam, then graduate students at Washington University in St. Louis.
With their eyes tracked by the cameras, the team’s research subjects at first looked at the expected areas in image after image.
McNamara and her fellow researchers had a question.
“We wanted to see if people’s gazes would actually follow a path we chose for them through the scene,” she said. “For example, if we wanted people to look at one area, we wondered if we could actually force them to look there.”
Putting a flashing red light in the corner of the screen would work, of course, but McNamara and her colleagues had something else in mind.
“If you wanted to do it in a subtle manner, so the user wasn’t aware their eyes were being directed, how could you do that?” she asked. “We came up with the idea of applying image space modulation.”
The technique involves choosing a small region of pixels in the image and alternating those pixels between black and white: a tiny flicker in the image’s luminance channel.
“Humans are highly responsive to changes in luminance,” she said.
The flickers, called modulations, are visible to the naked eye — if one knows where they are and when they’re occurring.
“The modulations catch your attention, but only for a brief enough period that your eyes actually move toward it,” she said. “Once you start looking toward it, the modulation stops.”
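The gaze-contingent loop described above can be sketched in a few lines of code. This is an illustrative reconstruction, not the researchers’ implementation; the flicker rate, fovea radius, and luminance step are assumed values chosen for the example.

```python
import math

# Assumed parameters for illustration; the published work may differ.
FOVEA_RADIUS_PX = 100   # stop modulating once gaze is this close to the target
FLICKER_HZ = 10         # flicker rate of the luminance modulation
FRAME_RATE = 60         # display refresh rate

def luminance_delta(frame_index, gaze, target):
    """Return the luminance offset to add to the target region this frame.

    The region flickers (alternately brighter and darker) only while the
    viewer's gaze is far from it; as soon as the eyes move toward the
    target, the modulation stops and the image is left unchanged, so the
    viewer never sees the flicker directly.
    """
    if math.hypot(gaze[0] - target[0], gaze[1] - target[1]) <= FOVEA_RADIUS_PX:
        return 0.0  # gaze has arrived: terminate the modulation
    # Square-wave flicker in the luminance channel.
    phase = (frame_index * FLICKER_HZ // FRAME_RATE) % 2
    return 0.1 if phase == 0 else -0.1
```

In a real renderer this function would be called once per frame with the latest eye-tracker sample, and the returned offset applied only to the chosen pixel region, leaving the rest of the image untouched.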
Using the eye-tracking technology, researchers found a dramatic difference in what part of an image subjects looked at with and without modulations.
Ignoring typical gaze attractors such as human faces, the subjects instead looked at modulated areas in empty sky, the ground, or clothing.
The impact of modulation is based on the characteristics of human vision.
“The reason we see a clear view of the world is because our eyes are always moving,” said McNamara. “If you could actually stop your eyes from moving, then you would see one very clear area in the center of your viewing area and the rest, your peripheral vision, would be blurred.”
Despite this blurring, McNamara said, humans are highly sensitive to motion in their peripheral vision.
Subtle gaze direction has a number of possible applications, said McNamara.
“Let’s say you were passing geometry over a network into a game,” she said. “You could use SGD to try to force the game player to look at an area that’s well-rendered and away from an area that’s not-so well rendered.”
This technique could save game developers the expense of rendering fine detail across the entire screen, improving the game’s performance without a perceptible degradation of the gaming experience.
SGD also has possible education applications.
“If you have a painting on a website that art students are learning from,” said McNamara, “you can have audio that guides them through the painting, but you could also use SGD to have them look at specific regions as you’re talking about them.”
The use of SGD in training simulations could also enhance the learning process.
“In a flight simulator, for example, you could use SGD to direct a user’s gaze to important controls, or if one of the controls were to become critical, you could modulate that and have them look at it,” she said. The technique could also be used in driving simulators, for instance by modulating the rear-view and side-mirror areas to instill checking habits in young drivers that could potentially transfer to the real world.
And, of course, businesses have products to sell.
“Advertisers would love to be able to direct where you gaze,” she said.
McNamara, who joined the Aggie faculty in 2008, holds a Ph.D. in Computer Graphics from the University of Bristol, U.K., a Master of Arts in Education from the University of Dublin, Ireland, and a Bachelor of Science in Computer Science from the University of Bristol.
Her research focuses on the advancement of computer graphics and scientific visualization through novel approaches for optimizing an individual’s experience when creating, viewing and interacting with virtual spaces.
- Posted: May 4, 2009 -