Tuesday, January 26, 2010

Wearable EOG goggles: eye-based interaction in everyday environments

SUMMARY
The paper presents an embedded eye tracker that relies on Electrooculography (EOG). The eyes are the source of a steady electric potential field, and moving the eyes from the center towards the periphery changes this potential. Conversely, by measuring the change in potential, the position of the eyeball can be determined. The system described in the paper consists of goggles with the following components:
1. Four electrodes arranged as two pairs near the left eye. The pairs sense the change in potential caused by eye movement and send their signals to the EOG processor.
2. A fifth electrode mounted above the right eye provides the reference signal.
3. A pocket-worn component for EOG signal processing, which derives the horizontal and vertical movement of the eyes and can transmit it to a computer via Bluetooth.
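Since EOG amplitude is roughly proportional to eye rotation (up to moderate angles), the mapping from the two channels to a gaze direction can be sketched as below. This is a hypothetical illustration, not the authors' code; the function name and the calibration constant `sensitivity_uv_per_deg` are assumptions (real systems estimate the sensitivity per user during calibration).

```python
def eog_to_gaze(v_horizontal_uv, v_vertical_uv, sensitivity_uv_per_deg=20.0):
    """Convert horizontal/vertical EOG voltages (in microvolts) to gaze
    angles in degrees, relative to a calibrated center position.

    Assumes an approximately linear EOG response; the sensitivity value
    here is an illustrative placeholder, not a value from the paper."""
    return (v_horizontal_uv / sensitivity_uv_per_deg,
            v_vertical_uv / sensitivity_uv_per_deg)

# Example: +200 uV on the horizontal pair, -100 uV on the vertical pair
h_deg, v_deg = eog_to_gaze(200.0, -100.0)  # -> (10.0, -5.0)
```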
Eye gesture recognition: The following steps are involved:
1. Blink detection and removal: Blinks are detected with a template matching approach, using a template created manually from example blinks of several persons.
2. Saccade detection: Saccades are detected using the Continuous Wavelet Transform.
3. Eye gesture recognition: Each gesture is first encoded as a string of basic directions: L, R, U and D (left, right, up, down). In a second step, basic movements that form diagonal eye movements are combined and represented as 1, 3, 7 and 9. To recognize a gesture, this string is then compared with known gesture templates.
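The pipeline above can be sketched roughly as follows. This is an illustrative assumption, not the authors' implementation: the correlation threshold, the blink template, and the gesture-to-command mapping are all made up for the example, and the matching step is shown as a simple exact-string lookup.

```python
import numpy as np

def detect_blinks(signal, blink_template, threshold=0.9):
    """Slide the blink template over the vertical EOG signal and return
    the positions whose normalized cross-correlation exceeds threshold.
    (Illustrative version of the template-matching step; threshold is
    an assumed value.)"""
    t = (blink_template - blink_template.mean()) / blink_template.std()
    n = len(blink_template)
    hits = []
    for i in range(len(signal) - n + 1):
        window = signal[i:i + n]
        if window.std() == 0:          # flat segment: cannot normalize
            continue
        z = (window - window.mean()) / window.std()
        if np.dot(z, t) / n > threshold:
            hits.append(i)
    return hits

# Hypothetical mapping from encoded gesture strings (L/R/U/D plus the
# diagonal symbols 1, 3, 7, 9) to commands -- not from the paper.
GESTURE_TEMPLATES = {"RLRL": "activate", "URDL": "select"}

def recognize_gesture(direction_string):
    """Exact-match lookup of an encoded gesture string against the
    known gesture templates; returns None for unknown strings."""
    return GESTURE_TEMPLATES.get(direction_string)

# A synthetic vertical-channel signal containing one blink-shaped bump
signal = np.array([0, 0, 0, 0, 1, 2, 1, 0, 0, 0], dtype=float)
template = np.array([0, 1, 2, 1, 0], dtype=float)
blink_positions = detect_blinks(signal, template)  # -> [3]
```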
To evaluate the EOG system, a game with eight different levels was used. Subjects had to repeatedly perform eye gestures as accurately and as quickly as possible; to reach a high score, wrong eye movements had to be minimized. The authors found EOG to be a robust modality for HCI applications. However, 30% of the subjects reported fatigue.

DISCUSSION
Producing unnatural gestures with the eyes does not seem to me a very good mode of interaction. The visual channel is already overloaded with many tasks: gazing at the screen, typing, reading information. Adding another voluntary task would, in my opinion, result in fatigue and confusion. I also do not see an application for the kind of gestures described in the paper; the authors have not motivated their work in that respect. The user study was lacking in that it does not say much about how easy users found it to perform the gestures, how accurate the system was, and so on.
