Eye Gaze

Timeline

Fall 2018 – Spring 2019

Students

  • Henry Bang
  • Jonathan Aguilar
  • Khoa Pham

Abstract

Our team is creating a tool to help people with disabilities. The main purpose of the Eye Gaze detection project is to assist people who have ALS. The program needs to work on objects inside a house. Our tool detects the object the user is looking at and sends a message to a caretaker that the person needs assistance.

Background

The main people we are trying to help are those with ALS. The disease causes most of the body's muscles to stop working, but the eyes typically continue to function even after the disease has done significant damage to the person.

Project Requirements

  • Tracks eye movement and maps the gaze vector.
  • Headset calibration is used to outline the visible area for the user.
  • The program will accurately detect up to 10 objects.
  • The program will recognize up to 37 different items.
  • We chose speed over accuracy; the program is accurate enough for household items, and speed matters more because the user will be looking at many objects.
  • The detector can be retrained on additional objects specific to the user's needs.
  • The object name is sent over the network using sockets (a sketch follows this list).
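
The requirements do not specify the wire protocol, so the following is only a minimal sketch of the socket step, assuming a plain TCP connection to the client program; the host, port, and newline-terminated message format are illustrative assumptions.

    import socket

    CLIENT_HOST = "192.168.1.50"  # hypothetical address of the client program
    CLIENT_PORT = 5000            # hypothetical port agreed on by both programs

    def send_object_label(label):
        """Send the name of the object the user is gazing at to the client program."""
        with socket.create_connection((CLIENT_HOST, CLIENT_PORT), timeout=5) as conn:
            # A newline-terminated UTF-8 string is an assumed message format.
            conn.sendall((label + "\n").encode("utf-8"))

    send_object_label("cup")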

System Overview

PERCEPTION LAYER: The perception layer provides consistent data from the inward and outward cameras to the eye-tracking and image-extraction subsystems in the vision processing layer. The system uses two HD 1080p webcams to provide clean data to be processed. The webcams are securely mounted so that the vision layer can rely on consistent input, and they are physically connected to a desktop for power and data flow.

VISION PROCESSING LAYER: The vision processing layer contains the main software pipeline; it processes the streaming data coming in from both cameras and calculates the gaze vector. An open-source program called Pupil Labs handles all of the eye tracking and gaze mapping.
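
The integration itself happens inside Pupil Labs, but for reference, gaze data can also be read from outside the application. The sketch below assumes Pupil Capture's ZMQ-based network API with Pupil Remote listening on its default port 50020; field names such as norm_pos come from that API and may differ across versions.

    import zmq
    import msgpack

    # Ask Pupil Remote (default port 50020) for the port that publishes data.
    ctx = zmq.Context()
    remote = ctx.socket(zmq.REQ)
    remote.connect("tcp://127.0.0.1:50020")
    remote.send_string("SUB_PORT")
    sub_port = remote.recv_string()

    # Subscribe to gaze datums and print the mapped gaze position.
    sub = ctx.socket(zmq.SUB)
    sub.connect("tcp://127.0.0.1:%s" % sub_port)
    sub.setsockopt_string(zmq.SUBSCRIBE, "gaze.")

    while True:
        topic, payload = sub.recv_multipart()
        gaze = msgpack.unpackb(payload, raw=False)
        x, y = gaze["norm_pos"]  # normalized world-camera coordinates (0..1)
        print("gaze at (%.2f, %.2f), confidence %.2f" % (x, y, gaze["confidence"]))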

HARDWARE LAYER: The hardware layer contains the physical hardware needed to deliver the output of the eye-tracking software to the caretaker and user. The subsystems included are: a desktop with a GPU, a monitor for graphical output, and a speaker for text-to-speech.
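
The document does not name the text-to-speech tool, so the snippet below is only an assumed example, using the pyttsx3 library to announce a detected object through the speaker.

    import pyttsx3  # assumed offline text-to-speech library; the project may use a different tool

    def announce(object_label):
        """Speak the detected object's name through the connected speaker."""
        engine = pyttsx3.init()
        engine.say("The user is looking at the %s." % object_label)
        engine.runAndWait()

    announce("cup")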

OBJECT DETECTION LAYER: The object detection layer contains the SSD (Single Shot Detector) object detection, implemented as a plugin for Pupil Labs. SSD detects objects and only places bounding boxes on objects intersected by the gaze vector. The object label is then sent to the client program.
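
The plugin's internals are not shown here, but the core gaze-to-object matching step can be sketched as below; the box format (pixel coordinates matching the gaze point) and the confidence threshold are assumptions.

    # Sketch of matching the mapped gaze point against SSD detections.
    # Each detection is assumed to be (label, confidence, x_min, y_min, x_max, y_max)
    # in the same pixel coordinates as the gaze point.

    def object_under_gaze(gaze_x, gaze_y, detections, min_confidence=0.5):
        """Return the label of the highest-confidence box containing the gaze point."""
        best = None
        for label, conf, x_min, y_min, x_max, y_max in detections:
            if conf < min_confidence:
                continue
            if x_min <= gaze_x <= x_max and y_min <= gaze_y <= y_max:
                if best is None or conf > best[1]:
                    best = (label, conf)
        return best[0] if best is not None else None

    detections = [("cup", 0.91, 100, 200, 180, 280), ("remote", 0.62, 300, 220, 360, 260)]
    print(object_under_gaze(140, 240, detections))  # -> cup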

 

Results

 

Project Files

Project Charter: project_charter_latex_latest

System Requirements Specification: system_requirements_specification_latex_1(2)

Architectural Design Specification: ads_4(1)

Detailed Design Specification: detailed_design_specification_latex_sp2019

Poster: Innovation Day – SeniorDesignPosterTemplate (1)

