Team Name
Wheelchair Vision
Timeline
Fall 2018 – Spring 2019
Students
- Rauhaan Aamir
- Aninda Aminuzzaman
- Jason Do
- Quinton Helton
- David Jaime
Sponsor
Dr. Shiakolas from the Mechanical Engineering MARS Lab
Abstract
The prosthetic arms currently available to amputees are very expensive, and despite their high prices, many offer little to no functionality. This project aims to add vision technology to already developed robotic prosthetic arms to give them added utility, improving the livelihoods of people who depend on such devices by providing low-cost alternatives to existing ones.
The goal will be achieved primarily through computer vision techniques such as image processing, machine learning, segmentation, and object recognition. The software will process the image data captured by the camera and identify particular objects on a surface, such as a table. It will also provide the relative distances between objects, as well as the distance between any particular object and the user.
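The write-up does not say which distance-measurement technique the team used; one common monocular approach is the pinhole camera model, where an object of known physical width W that appears w pixels wide under a focal length of f pixels lies at distance d ≈ f·W / w. The sketch below illustrates this idea (the function name and example values are ours, not the team's):

```python
def estimate_distance(real_width_m: float, focal_length_px: float,
                      pixel_width: float) -> float:
    """Estimate camera-to-object distance with the pinhole model:
    d = f * W / w.

    real_width_m    -- known physical width of the object (meters)
    focal_length_px -- camera focal length expressed in pixels
    pixel_width     -- width of the object's bounding box in the image
    """
    return real_width_m * focal_length_px / pixel_width

# Example: a 7 cm cup spanning 140 px under a 700 px focal length
# is roughly 0.35 m from the camera.
print(estimate_distance(0.07, 700.0, 140.0))  # -> 0.35
```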
Background
The prosthetic arms available today are very expensive: a simple cosmetic prosthetic arm can cost up to $15,000, while a robotic, functional prosthetic arm can cost upwards of $40,000. This project’s sponsor, Dr. Shiakolas, has a vision of providing affordable robotic assistance to the people who need it. He is the director of the Manufacturing Automation and Robotic Systems (MARS) Lab at UTA, where robotic arms are being developed to one day replace the prosthetic arms patients currently purchase at such high prices. These robotic prosthetic arms will not only be low cost, they will also provide the patient with some of the functionality of a natural arm. This project provides the image processing tools and data needed by such a robotic arm.
Figure 1. Dr. Shiakolas and a former student displaying their work on another robotic arm project
Project Requirements
- Identify objects
- Determine the distance of each object from the camera
- Output the object and distance info to be used in robotic arm movement (see the record sketch after this list)
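The exact data format handed to the robotic arm is not specified here; as a rough illustration of the third requirement, a per-object record might look like the following (all field names are our assumptions):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class DetectedObject:
    label: str         # e.g. "cup", from the object recognizer
    confidence: float  # recognizer confidence in [0, 1]
    distance_m: float  # estimated camera-to-object distance (meters)
    x_px: int          # bounding-box center in image coordinates
    y_px: int

# Serialize one detection for the arm-control software to consume.
detection = DetectedObject("cup", 0.91, 0.35, 312, 240)
print(json.dumps(asdict(detection)))
# {"label": "cup", "confidence": 0.91, "distance_m": 0.35, "x_px": 312, "y_px": 240}
```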
System Overview
The main components are split into highly independent layers. The camera is mounted strategically on the wheelchair so that clear pictures can be sent as raw data to the object recognition layer, which consists of software that performs extensive analysis to identify objects in the image. This layer communicates with the distance measurement layer to calculate the physical parameters needed to control the robotic arm. The graphical user interface shows a live video feed with the targeted object highlighted, indicating which object will be lifted, along with a text display of the relevant physical values. The data transfer layer takes these inputs and outputs them in a format useful to the robotic arm.
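As a rough sketch of how these layers could be wired together (the team's actual implementation is not shown here; the pretrained MobileNet-SSD detector, model file names, and confidence threshold below are our assumptions), a Python/OpenCV pipeline might look like this:

```python
import cv2

# Object recognition layer: a pretrained SSD detector. The model
# files named here are placeholders, not the team's actual model.
net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "mobilenet_ssd.caffemodel")

def recognize(frame):
    """Return (class_id, confidence, bounding_box) for each detection."""
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()
    h, w = frame.shape[:2]
    results = []
    for i in range(detections.shape[2]):
        confidence = float(detections[0, 0, i, 2])
        if confidence > 0.5:  # assumed threshold
            class_id = int(detections[0, 0, i, 1])
            box = (detections[0, 0, i, 3:7] * [w, h, w, h]).astype(int)
            results.append((class_id, confidence, box))
    return results

# Camera layer: the wheelchair-mounted camera supplies raw frames.
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for class_id, confidence, (x1, y1, x2, y2) in recognize(frame):
        # The distance measurement layer would run here (see the
        # pinhole-model sketch above), and the data transfer layer
        # would forward the label and distance to the arm controller.
        cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)),
                      (0, 255, 0), 2)
    # GUI layer: live video with the targeted object outlined.
    cv2.imshow("Wheelchair Vision", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```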
Results
The system was prototyped and tested incrementally. After determining the necessary parts of the system, we developed each subsystem individually. After implementing each subsystem and integrating it with the rest of the system, we informally tested its output against our expectations and specifications. Once the full system was functional, we prototyped it and demonstrated it to our instructor and to the related MAE Senior Design team (who are working on the actual robotic arm), whose feedback helped us improve and finalize the system.
We met our basic success criteria: identify objects in front of a user, determine the distance from the user to each object, and provide that data to a robotic arm. We also satisfied the goal of creating a system that is much more affordable than other current options.
Future Work
In the future, this system should be improved and more fully integrated with the software controlling the robotic arm. It would also be beneficial for a future version to support voice commands for accessibility.
Acknowledgments
We would like to thank Dr. Panos Shiakolas of the MARS Lab, our client and sponsor for this project. We would also like to thank the College of Engineering for providing additional financial support for the project. Finally, we would like to thank the Computer Science and Engineering Department and our Senior Design instructor, Dr. Christopher McMurrough, for their help and support throughout the project.
Project Files
- System Requirement Specification (SRS)
- Architectural Design Specification (ADS)