Eye Tracking Demo: SVGs as AOIs

Overview

In this demo, our goal is to capture eye tracking gaze data. Specifically, we want to capture the gaze data associated with specific Areas of Interest (AOIs), which are included in the study as SVG shape objects.

A preview of the demo in action is shown below with the red circle providing feedback by indicating the location of the gaze in real time:

Preview of the gaze location in a webcam-based eye tracking experiment.

  • NOTE: The demo recording was made with the 5.5-minute calibration option; the red circle is an object that represents the participant's gaze in real time, and the overall setup is explained further below.

Take a look at this quick video that shows the demo in action, as well as a preview of the data recorded at the end:

Objects

This demo includes a background image of a pier on which 4 SVG objects of people are placed. There is also a gaze feedback object (the red circle); for the purposes of this demo, we wanted to represent gaze in real time, and this red circle reflects that.

The objects placed in the Labvanced editor

Custom Variables Created

In order to record data, variables are required. The custom variables created for this demo are accessible via the 'Variables' tab; the image below shows their details and settings:

Custom variables created for this webcam-based eye tracking study.

Below is an explanation of the variables and their purpose:

| Variable Name | Description | Record Type |
| --- | --- | --- |
| All Gaze Data | Stores the array of x/y coordinates [X,Y], as well as the capture time [T] and the confidence level [C] of the measurement. | All changes / time series |
| AOI1_gaze_data | Stores the [X,Y,T,C] array when the participant looks at the SVG object named AOI1. | All changes / time series |
| SVG1_path | Records the path of the SVG node of AOI1 that the gaze rested upon (explained further below). | All changes / time series |
| AOI2_gaze_data | Stores the [X,Y,T,C] array when the participant looks at the SVG object named AOI2. | All changes / time series |
| SVG2_path | Records the path of the SVG node of AOI2 that the gaze rested upon. | All changes / time series |

…and so on for AOI3 and AOI4, which follow the same pattern.
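For reference, here is a minimal Python sketch of what a single recorded gaze sample looks like. The class and field names are hypothetical illustrations, not part of Labvanced; the units and ranges of the values depend on your recording settings:

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    """One gaze measurement, mirroring the [X,Y,T,C] array stored above."""
    x: float  # horizontal gaze coordinate
    y: float  # vertical gaze coordinate
    t: float  # capture time of the measurement
    c: float  # confidence level of the measurement

# One sample, as it would be appended to a time series variable:
sample = GazeSample(x=512.3, y=300.8, t=1024.0, c=0.87)
print([sample.x, sample.y, sample.t, sample.c])  # [512.3, 300.8, 1024.0, 0.87]
```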

Events Set Up

Since we are interested in capturing gaze data, the following events are used:

  1. All Gaze Data: This event starts as soon as the task begins and records eye tracking gaze data continuously.
  2. AOI Gaze Data: This event fires whenever the participant looks at a specific AOI (i.e., one of the four people) and records the gaze-related values.
  3. Finish Experiment: This event accepts/ends the session and records the data when the participant clicks the 'Finish' button.

Event 1: All Gaze Data

For the first event, we want to accomplish the following:

  • record all eye tracking data throughout the duration of the study
  • display the detected location of the gaze on the screen in real time (so that you, as the researcher, can get a sense of the webcam tracking when trying out the demo)

Trigger

Thus, we use an eye tracking gaze trigger to initiate this event.

Selecting the Eyetracking Gaze trigger option.

Action

Once gaze is detected, the following actions occur. First, we use the All Gaze Data variable that we created earlier in a Set / Record Variable action in order to record the data as an [X,Y,T,C] array.

Recording the data array and setting the eye tracking values

As shown below, this is accomplished by clicking the green button, selecting the target variable, clicking the pencil icon to open the value-select menu, and then choosing the [X,Y,T,C] array from the trigger-specific (Eyetracking Gaze) menu:

Selecting which eye tracking data values should be recorded

We also add a Set Object Property action to set the red circle object (called 'gaze') so that its X property equals Coordinate X and its Y property equals Coordinate Y. In other words, we set the object's x- and y-values to the gaze's x- and y-coordinates; this is what makes the object move in real time.

Setting the object property to take on the gaze coordinates in order to provide visual feedback of where the gaze is in real-time.

NOTE: the options for Coordinate X and Coordinate Y are selected from the trigger-specific (Eyetracking Gaze) menu.
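To summarize the logic of this event, here is a minimal Python sketch. It is illustrative only; the function, object, and variable names are hypothetical stand-ins, since the actual setup is done through the Labvanced event editor:

```python
all_gaze_data = []  # mirrors the 'All Gaze Data' time series variable

class Circle:
    """Stands in for the red 'gaze' feedback object."""
    x = 0.0
    y = 0.0

gaze_circle = Circle()

def on_gaze(x, y, t, c):
    """Runs on every Eyetracking Gaze trigger."""
    all_gaze_data.append([x, y, t, c])  # Set / Record Variable action
    gaze_circle.x = x  # Set Object Property: X = Coordinate X
    gaze_circle.y = y  # Set Object Property: Y = Coordinate Y

on_gaze(512.3, 300.8, 1024.0, 0.87)  # example sample
```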

Event 2: AOI Gaze Data

In this event, we want to set up the events such that:

  • gaze is also recorded when it falls specifically on the target AOIs (i.e., the SVG objects we have uploaded)
  • the specific SVG node data of the AOI is reported

Trigger

The Eyetracking Gaze trigger is also used here. In this context, however, we indicate that we are only interested in specific elements by clicking the respective option and then selecting the 4 SVG objects which act as Areas of Interest (AOIs).

Specifying the target Areas of Interest (AOIs) for the webcam-based eye tracking experiment.

Action

To handle each AOI, we need a Control Action, specifically the Requirement Action (If…Then). Essentially, we want Labvanced to do the following: "If the participant is gazing at AOI1, then record the specific data for it in a dedicated variable, as well as the SVG path of that gaze."

First we click on the + Requirement button and two pencil icon value fields appear.

  • First pencil icon: For the first field, we select the Stimulus Name option from the trigger-specific (Eyetracking Gaze) menu; this holds the object name of the stimulus that the trigger (in this case, gaze) was on.
  • Second pencil icon: Select Constant Value, then select the String option and type in the object name (i.e., AOI1):

Setting the stimulus in an if/then event to be AOI1

So up to this point we have established that when the trigger (gaze) is on the stimulus named AOI1... then:

Specifying what actions occur when the gaze is on AOI1.

  • The AOI1_gaze_data variable that we created earlier records the [X,Y,T,C] array.
  • The SVG1_path variable records the Stimulus Info; since the target of interest (AOI1) is an SVG, this info contains the nodes of the SVG.

To specify the next AOI, click the + Add Else If Case button in the dialog box and repeat the setup shown above, with two differences: set the Stimulus Name to AOI2 and use the variables dedicated to storing its data, i.e., AOI2_gaze_data and SVG2_path, as shown below:

Specifying that when the gaze is on AOI2 then the gaze array coordinates and the SVG path are recorded.

Next, for AOI3, we click + Add Else If Case again and follow the same structure, using the custom variables we created to store the data for this specific AOI:

Specifying that when the gaze is on AOI3 then the gaze array coordinates and the SVG path are recorded.

Again, for AOI4, we click + Add Else If Case once more and follow the same structure, using the custom variables created for this AOI. A sketch of the full branching logic follows the image below:

Specifying that when the gaze is on AOI4 then the gaze array coordinates and the SVG path are recorded.
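The complete If…Then / Else If chain is sketched below in Python. Again, this is illustrative only; the names are hypothetical stand-ins for the variables and trigger values configured above:

```python
# Per-AOI storage, mirroring the custom variables created earlier.
aoi1_gaze_data, svg1_path = [], []
aoi2_gaze_data, svg2_path = [], []
aoi3_gaze_data, svg3_path = [], []
aoi4_gaze_data, svg4_path = [], []

def on_aoi_gaze(stimulus_name, x, y, t, c, stimulus_info):
    """Route a gaze sample based on the Stimulus Name requirement."""
    if stimulus_name == "AOI1":
        aoi1_gaze_data.append([x, y, t, c])
        svg1_path.append(stimulus_info)  # SVG node the gaze rested on
    elif stimulus_name == "AOI2":
        aoi2_gaze_data.append([x, y, t, c])
        svg2_path.append(stimulus_info)
    elif stimulus_name == "AOI3":
        aoi3_gaze_data.append([x, y, t, c])
        svg3_path.append(stimulus_info)
    elif stimulus_name == "AOI4":
        aoi4_gaze_data.append([x, y, t, c])
        svg4_path.append(stimulus_info)
```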

Event 3: Finish Experiment / Save Data

Lastly, for the data to be recorded and stored, we use a Jump action set to Accept / End Session once the Finish button is clicked (the trigger), as shown below.

Trigger

The trigger is specified to occur when the ‘Finish Button’ is used:

Selecting a button click as the trigger.

Action

The resulting action is that the session is accepted, which means that the data is recorded.

Selecting the accept/end session as the action

Let's take a look at what the recorded data looks like in the next section.

Data Recorded

The data recorded includes all of the custom variables plus experiment-specific values like the task number, the session number, etc.

In the example below, the Dataview & Export tab shows what the data looks like when it is separated into time series files. Each time series variable (i.e., each variable that records all changes) has its own CSV file in the left panel below. This can be accomplished via the Export Format settings.

When All Gaze Data.csv is selected, numerous variables are shown, including those below. The 'value' column captures the [X,Y,T,C] array, with the values separated by commas:

Preview of all the gaze eye tracking data
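If you want to process this export programmatically, the following Python sketch splits the 'value' column back into its four components. The file name and column header follow the preview above and may differ in your own export:

```python
import csv

with open("All Gaze Data.csv", newline="") as f:
    for row in csv.DictReader(f):
        # 'value' holds the comma-separated [X,Y,T,C] array,
        # e.g. "512.3,300.8,1024.0,0.87"
        x, y, t, c = (float(v) for v in row["value"].split(","))
        print(x, y, t, c)
```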

The image below shows a preview of the arrays that were recorded specifically when the gaze was on AOI1, as shown by the ‘value’ column:

Preview of the gaze eye tracking data that was recorded for AOI1

The image below shows a preview of the SVG paths that were recorded specifically when the gaze was on AOI1, as shown by the ‘value’ column:

Preview of the gaze eye tracking data that was recorded for the SVG path

Conclusion

This demo shows how to record gaze data and how to use SVG objects to record gaze specifically when it falls on an Area of Interest (AOI).

If you have any questions, please reach out to us and let us know about the details of your experiment, especially if you need to conduct a feasibility check!