AutoMav

Team Name

AutoMav

Timeline

Fall 2018 – Spring 2019

Students

  • Dario Ugalde
  • Awet Tesfamariam
  • Andrew Break
  • Amgad Alamin
  • Warren Smith


Abstract

The purpose of this project is to design and construct an intelligent ground vehicle. With this vehicle we plan to compete in the 27th Annual Intelligent Ground Vehicle Competition (IGVC) in June 2019. By completing this project, the team will experience designing and developing a functional product from scratch while becoming familiar with new technologies along the way. The competition will give the team a chance to showcase their work and demonstrate what they have learned while studying Computer Science and Computer Engineering at UT Arlington.

Background

The Intelligent Ground Vehicle Competition (IGVC) gives us an opportunity to gain design experience that incorporates the latest technology. Autonomous vehicles have applications in military, industrial, and commercial areas. The goal of IGVC is for the team to construct an autonomous unmanned ground vehicle. The vehicle will be tested on an outdoor course defined by painted lanes and both painted and physical obstacles. It will be given a GPS location to navigate to and must do so without crossing lane markers or striking any obstacles. Related work can be found in the design reports of previous IGVC participants. The first-place winner of the 2017 competition was IIT Bombay; their design used two motor-driven wheels with a single caster wheel in front, steering by varying the torque applied to each driven wheel. Their vehicle’s sensors included a LIDAR, an IMU, two cameras, and a GPS sensor [1]. The second-place winner that year had two powered front wheels with two caster wheels in the back; their sensors included 3D and 2D LIDAR units along with a gyroscope, GPS, and an omnidirectional camera [2]. The third-place winner used a compass, GPS, camera, and LIDAR. With these sensors, the camera data is used to detect the painted lanes, while the LIDAR detects the physical obstacles, allowing the vehicle to navigate the course [3].
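The torque-differential steering described above follows standard differential-drive kinematics: equal wheel speeds drive the vehicle straight, and a speed difference turns it. As a small illustrative sketch (the function name and wheel-base parameter are ours, not from any team's report):

```python
def body_velocity(v_left, v_right, wheel_base):
    """Forward kinematics of a differential-drive base.

    v_left, v_right: wheel ground speeds (m/s)
    wheel_base: distance between the two driven wheels (m)
    Returns (linear velocity m/s, angular velocity rad/s,
    positive angular velocity = counterclockwise turn).
    """
    v = (v_left + v_right) / 2.0          # average speed moves the body forward
    omega = (v_right - v_left) / wheel_base  # speed difference rotates the body
    return v, omega
```

With equal wheel speeds the angular velocity is zero; spinning the right wheel faster than the left yields a left (counterclockwise) turn.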

Project Requirements

  • The vehicle must maintain contact with the ground.
  • The vehicle must have a mechanical emergency stop button.
  • The vehicle must have a wireless emergency stop function.
  • The vehicle must have a safety light that indicates its current operation.
  • The vehicle must carry a payload weighing 20 lbs and measuring 18” x 8” x 8”.
  • The vehicle must demonstrate lane following.
  • The vehicle must demonstrate obstacle avoidance.
  • The vehicle must utilize waypoint navigation.
  • The vehicle must complete the course in under 10 minutes.
  • All computation, sensing, and movement control must occur on board the vehicle.
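Waypoint navigation against a GPS target typically reduces to computing the distance and initial bearing from the vehicle's current fix to the waypoint. A minimal sketch using the haversine formula, assuming coordinates in decimal degrees (function names are ours):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points (degrees)."""
    R = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def initial_bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing toward the waypoint (0 = north, 90 = east)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    x = math.sin(dl) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(x, y)) % 360.0
```

A navigation loop would steer toward the returned bearing and declare the waypoint reached once the distance drops below some threshold.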

System Overview

The AutoMav system is separated into five modules: Central Control, External Sensors, Hardware, Navigation, and Computer Vision. Each module is responsible for a separate group of core functions within the system.


Central Control

The central control unit will provide a central point for system status information, handle system-wide commands, and serve as the platform through which all other nodes interact with the system.
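One way such a unit can be organized is as a publish/subscribe hub that records the latest status per topic and fans commands out to interested nodes. The sketch below is an illustrative stand-in (class and topic names are ours), not the team's actual central control code:

```python
from collections import defaultdict

class ControlHub:
    """Minimal pub/sub hub: nodes subscribe to topics; the hub
    relays published messages and keeps the latest system status."""

    def __init__(self):
        self._subs = defaultdict(list)  # topic -> list of callbacks
        self.status = {}                # topic -> last message seen

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        self.status[topic] = message    # retain for status queries
        for cb in self._subs[topic]:
            cb(message)                 # deliver to every subscriber
```

A node such as the motor controller could subscribe to a hypothetical "estop" topic so that a system-wide stop command reaches it without direct coupling between nodes.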

External Sensors

This subsystem comprises the devices that sense information about the environment. The vehicle will use a 3D camera and a GPS receiver as its external sensors.

Hardware

This subsystem deals with the components of the system that handle information exchange at the physical level. This includes any necessary signal processing, electronics, and motion controls.

Navigation

The navigation subsystem will be responsible for creating a path from the vehicle's current location to the intended location based on the information it receives from the external sensors and the computer vision system. The navigation layer will be split into two distinct nodes: Path Finding and SLAM (simultaneous localization and mapping).
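The path-finding node could, for example, search the occupancy grid with A*. The version below is a minimal illustration under our own assumptions (4-connected grid, unit step cost, Manhattan heuristic), not the team's actual planner:

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid.

    grid: list of rows; grid[r][c] == 1 means the cell is blocked.
    start, goal: (row, col) tuples.
    Returns the list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda cell: abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    open_heap = [(h(start), 0, start)]  # entries are (f, g, cell)
    parent = {start: None}
    g_cost = {start: 0}
    closed = set()
    while open_heap:
        _, g, cell = heapq.heappop(open_heap)
        if cell in closed:
            continue
        closed.add(cell)
        if cell == goal:
            path = []               # walk parents back to the start
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and nxt not in closed):
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    parent[nxt] = cell
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None  # no path exists
```

Because the Manhattan heuristic never overestimates on a unit-cost 4-connected grid, the first time the goal is expanded the returned path is shortest.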

Computer Vision

The computer vision system is responsible for taking images captured by the external sensors and recognizing obstructions in the vehicle’s path, as well as recognizing lane lines. Obstacle recognition will utilize the 3D camera, whose depth information is used to detect objects in front of the vehicle. Lane recognition will use the 2D component of the 3D camera data to search for edges in images containing painted lanes and painted potholes. The outputs of both computer vision nodes are used to construct an occupancy grid of the vehicle’s surroundings, which is sent to the path-finding subsystem.
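Constructing the occupancy grid amounts to rasterizing the detected obstacle and lane points into cells. A hedged sketch under our own assumptions (points in meters in the vehicle's local frame, a fixed cell size, and our own function name):

```python
def build_occupancy_grid(width, height, cell_size, points):
    """Mark each detected (x, y) point as an occupied cell.

    width, height: grid dimensions in cells.
    cell_size: side length of one cell in meters.
    points: iterable of (x, y) detections; x maps to columns, y to rows.
    Points falling outside the grid are ignored.
    """
    grid = [[0] * width for _ in range(height)]
    for x, y in points:
        col = int(x / cell_size)
        row = int(y / cell_size)
        if 0 <= row < height and 0 <= col < width:
            grid[row][col] = 1  # cell is occupied
    return grid
```

Since painted lanes and physical obstacles are both forbidden regions, feeding both sets of detections into the same grid lets the path finder treat them uniformly.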

Results

Demo Video

Future Work

Elements of AutoMav will be used by other senior design teams to further improve UTA’s future entries in IGVC.

Project Files

Project Charter 

System Requirements Specification

Architectural Design Specification

Detailed Design Specification

Poster

References

[1] Indian Institute of Technology Bombay http://www.igvc.org/design/2017/8.pdf

[2] Hosei University http://www.igvc.org/design/2017/6.pdf

[3] Bob Jones University http://www.igvc.org/design/2017/2.pdf

[4] IGVC Website http://www.igvc.org
