MacNears

Team Name

MacNears

Timeline

Fall 2019 – Spring 2020

Students

  • Paras Pathak
  • Ramesh Sitaula
  • Shivangi Vyas
  • Sandhya Sharma
  • Lewis Shemery
  • Yiyambaze Nkhoma

Abstract

The inspiration for this project is twofold: to help new drivers learn traffic signs, and to improve on the object detection technology currently available for iOS. Because road accidents are a major cause of death in the United States, our app can be used in older vehicles without advanced safety features to detect traffic signs and, in turn, help minimize accidents. Furthermore, the app’s core technology could be integrated into cars to support fully autonomous capabilities, and the application could be extended to recognize traffic signals in addition to traffic signs.

Background

Driving, while it may seem simple, is an involved task that demands a high level of attention to be done safely and effectively. With so many distractions around us today, technology that can assist drivers and alert them to upcoming signs they may have missed is extremely beneficial. The goal of this product is not to encourage inattentiveness but to provide another set of eyes that can see critical things drivers may be unaware of and alert them. This application is intended to help drivers detect traffic signs that are often overlooked. It is also intended to serve as a core model for autonomous vehicles, helping such systems detect traffic signs accurately and precisely in real time.

Project Requirements

  1. The application should be able to access the phone's live camera (illustrated in the sketch after this list).
  2. Weather conditions should be clear enough for the camera to capture video without blurriness.
  3. The app should be able to detect the traffic signs the model has been trained on.
  4. The app should find traffic signs in the live feed quickly enough to be usable while driving.
  5. The phone should be held stable enough to produce clear video frames.
  6. Predict traffic signs with their correct labels.
  7. Draw a bounding box around the area in the live video feed where a traffic sign has been found.
  8. Report the confidence, as a percentage, of each traffic sign the application predicts.
  9. Report the latency, in milliseconds, of the application's image processing.
  10. Ask the user for permission to capture live video and analyze it.
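As a minimal sketch of requirements 1 and 10, the snippet below shows one way an iOS app can request camera permission and start streaming frames with AVFoundation. The CameraFeed class name is hypothetical, not taken from the project, and a real app would also need an NSCameraUsageDescription entry in its Info.plist.

```swift
import AVFoundation

// Hypothetical sketch covering requirements 1 and 10: ask the user for
// permission, then start a live camera session that streams frames.
final class CameraFeed: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let output = AVCaptureVideoDataOutput()

    func start() {
        // Requirement 10: the user must grant permission before any capture.
        AVCaptureDevice.requestAccess(for: .video) { granted in
            guard granted else { return } // user declined; no video is captured
            self.configureAndRun()
        }
    }

    private func configureAndRun() {
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input), session.canAddOutput(output)
        else { return }
        session.beginConfiguration()
        session.addInput(input)
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.frames"))
        session.addOutput(output)
        session.commitConfiguration()
        session.startRunning() // requirement 1: live camera access
    }

    // Every captured frame lands here and can be handed to the detection layer.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Forward sampleBuffer to the object detection layer (see System Overview).
    }
}
```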

System Overview

Our app is divided into three major layers, each responsible for an important part of the work. The first is the object detection layer, which takes live camera input from the phone and checks whether any object in the frame looks like a traffic sign. If it finds a candidate, it passes the video buffer to the second layer. The second layer is the object recognition layer; it processes the output of the first layer and determines whether the candidate really is a traffic sign by passing it through a Convolutional Neural Network built using TensorFlow. If a sign is recognized, the recognition layer sends the coordinates of the sign's location to the third layer. Finally, the GUI layer notifies the user of the sign's location by drawing a bounding box around it.
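The project's exact inference path is not shown here, so the following is a minimal sketch of the recognition layer and its hand-off to the GUI layer, assuming the TensorFlow CNN has been converted to a Core ML model; TrafficSignDetector and SignRecognizer are hypothetical names. The sketch also times each request, which is one way to satisfy the latency requirement (requirement 9).

```swift
import Vision
import CoreML

// Hypothetical sketch of the recognition layer (layer 2). "TrafficSignDetector"
// stands in for the team's TensorFlow CNN, assumed here to have been converted
// to a Core ML model.
final class SignRecognizer {
    private let model: VNCoreMLModel

    init() throws {
        let cnn = try TrafficSignDetector(configuration: MLModelConfiguration()).model
        model = try VNCoreMLModel(for: cnn)
    }

    // Takes the video buffer handed over by the detection layer, runs the CNN,
    // and reports each sign's bounding box, label, confidence, and the
    // processing latency to the GUI layer.
    func recognize(in pixelBuffer: CVPixelBuffer,
                   onResult: @escaping (CGRect, String, Float, Double) -> Void) {
        let start = CFAbsoluteTimeGetCurrent()
        let request = VNCoreMLRequest(model: model) { request, _ in
            let latencyMs = (CFAbsoluteTimeGetCurrent() - start) * 1000 // requirement 9
            for case let sign as VNRecognizedObjectObservation in request.results ?? [] {
                guard let top = sign.labels.first else { continue }
                // boundingBox is normalized (0...1); the GUI layer converts it
                // to screen coordinates before drawing (requirement 7).
                onResult(sign.boundingBox, top.identifier, top.confidence, latencyMs)
            }
        }
        try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer).perform([request])
    }
}
```

Since Vision reports bounding boxes in normalized image coordinates, the GUI layer would map them into the camera preview's coordinate space (for example, via AVCaptureVideoPreviewLayer's layerRectConverted(fromMetadataOutputRect:)) before drawing the box.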

Results

Future Work

Train the ML model on an American traffic sign dataset instead of the German traffic sign dataset, and add a voice feature that announces a traffic sign whenever one is detected (a sketch of the voice feature follows).
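As a sketch of the proposed voice feature, the snippet below speaks a detected sign's label aloud using the system speech synthesizer; the SignAnnouncer class and the spoken phrasing are assumptions, not part of the project as built.

```swift
import AVFoundation

// Hypothetical sketch of the proposed voice feature: speak the label of a
// detected traffic sign aloud using the system speech synthesizer.
final class SignAnnouncer {
    private let synthesizer = AVSpeechSynthesizer()

    func announce(_ signLabel: String) {
        // Avoid talking over an announcement that is still playing.
        guard !synthesizer.isSpeaking else { return }
        let utterance = AVSpeechUtterance(string: "\(signLabel) ahead")
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        synthesizer.speak(utterance)
    }
}
```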

Project Files

Project Charter

System Requirements Specification

Architectural Design Specification

Detailed Design Specification

Poster

References

N/A
