Team R.O.B.

Timeline

Summer – Fall 2019

Students

  • Nhan Le
  • Minh Tram
  • Nhan Lam
  • Hunter Nghiem
  • David Chavez

Abstract

The goal of this project is to create a camera gimbal that provides a live feed of the surrounding area to a Virtual Reality headset. It gives users a live view of areas that are inaccessible or difficult for humans to reach. This allows safe access to places that would be hazardous to people, such as exploring a high-radiation area or inspecting the undercarriage of a car at a checkpoint for bombs or restricted components.

Background

There are many places around the world that are not physically reachable by humans. These range from unexplored regions to man-made disaster areas such as Chernobyl, or the Fukushima nuclear plant after its March 2011 incident. Current solutions such as three-dimensional mapping can give a good idea of how an area looks, but they do not capture enough detail about the environment. In border security, the current solution can be as simple as a mirror attached to a pole to check for possible harmful materials underneath vehicles. Even though this is a direct live view, it is an outdated approach that may not provide enough coverage and can take longer to inspect certain areas. Other modern solutions use robotics and computer vision to provide live feeds to an operator; however, some of these camera systems lack the flexibility to quickly deliver video of the surrounding environment. Today’s autonomous robots are not always capable enough and will occasionally need manual operator override.

Project Requirements

  1. The camera gimbal system shall synchronize with the user’s head movements, using the VR headset as a reference.
  2. The camera shall provide a live video feed to the VR headset display with minimal delay.
  3. The live video feed shall be in color, at adequate resolution with minimal input delay, and run at a minimum of sixty frames per second (a frame-rate measurement sketch follows this list).
  4. The time delay between the user’s headset movement and the camera movement shall be minimal.
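
One informal way to check the frame-rate portion of these requirements is to time how long a fixed number of frames takes to arrive. The helper below is a minimal sketch assuming a hypothetical capture_frame callable; it is not part of the team’s actual test plan.

```python
import time

def measure_fps(capture_frame, sample_frames: int = 120) -> float:
    """Estimate the achieved frame rate of a video source.

    `capture_frame` is a hypothetical zero-argument callable that blocks
    until one frame has been received; it stands in for whatever camera
    API the real system uses.
    """
    start = time.monotonic()
    for _ in range(sample_frames):
        capture_frame()                 # block until the next frame arrives
    elapsed = time.monotonic() - start
    return sample_frames / elapsed      # frames per second over the sample
```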

System Overview

Currently, three possible implementations for the project are under consideration:

  1. A single camera on an omnidirectional motorized gimbal that translates VR headset motion into synchronous camera motion.
  2. Four to six cameras pointing in all directions of the field of view, rendering a “pseudo”-realistic world.
  3. Two to three cameras: one performing live head-motion translation, and one or two cameras updating the surroundings by spinning around an axis and delta-updating the video feed.

For the single-camera system, the camera will be mounted on a rigid, rotating platform that can cover 360° on the horizontal plane and up to 100° vertically to simulate commonly used human head motions. The platform will be actuated by a system of stepper motors combined with mechanical transmission and, initially, end-stops to provide smooth, precise, and stable camera motion. All communication from the single camera will be fed directly to a computer, which processes the video, feeds it to the VR headset, and transfers the headset motion back to the motor controllers to execute the movement (a motion-mapping sketch appears after this overview).

For the stationary 4-6 camera system, the VR headset motion will be virtualized and translated into a viewport within a stereo video hemisphere constructed from the set of cameras. Essentially, only the region the user is looking at in the virtual world will be a real-time frame; all other areas will use passive delta updating (change detection) to simulate a live feed. The user therefore sees a virtualized “pseudo-real-time” view that looks identical to the view in the one-camera system, but the motion-control stage is eliminated, which drastically improves response time and image stability.

The last solution is a hybrid of the first two: one mounted camera simulates head motion while one or two others perform passive updating by rotating around an axis and delta-updating the scene frame by frame at a higher rotation speed. This option was considered as a compromise to preserve bandwidth for live viewing and achieve a faster real-time rendering rate, since part of the scene is already rendered with the help of the secondary camera.
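
As a rough illustration of the single-camera option, the sketch below converts headset yaw and pitch angles into absolute stepper-motor step targets. Every constant and interface in it (STEPS_PER_REV, the motor objects, move_to) is a hypothetical assumption for illustration, not the team’s actual design.

```python
# Hypothetical constants: a 1.8 degree/step motor with 16x microstepping is
# assumed; the team's actual motors, gearing, and driver interface may differ.
STEPS_PER_REV = 200 * 16
YAW_LIMITS = (-180.0, 180.0)     # 360 degrees of horizontal coverage
PITCH_LIMITS = (-50.0, 50.0)     # roughly 100 degrees of vertical coverage

def clamp(value: float, lo: float, hi: float) -> float:
    return max(lo, min(hi, value))

def degrees_to_steps(angle_deg: float) -> int:
    """Convert a target angle into an absolute step count for the driver."""
    return round(angle_deg / 360.0 * STEPS_PER_REV)

def follow_headset(yaw_deg: float, pitch_deg: float, yaw_motor, pitch_motor) -> None:
    """Command both gimbal axes to match the headset's yaw and pitch.

    `yaw_motor` and `pitch_motor` are hypothetical driver objects exposing
    a `move_to(absolute_steps)` method.
    """
    yaw_motor.move_to(degrees_to_steps(clamp(yaw_deg, *YAW_LIMITS)))
    pitch_motor.move_to(degrees_to_steps(clamp(pitch_deg, *PITCH_LIMITS)))
```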

Ideally, the system operates as a closed loop: the VR headset reports its orientation to the computer, the computer commands the gimbal’s motor controllers to match that orientation, and the camera’s video is processed and streamed back to the headset display.
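
A minimal sketch of that loop follows, assuming hypothetical headset, camera, and gimbal interfaces that do not come from the project files:

```python
import time

def control_loop(headset, camera, gimbal, target_fps: int = 60) -> None:
    """One pass through the intended data flow, repeated every frame.

    `headset`, `camera`, and `gimbal` are hypothetical interfaces standing
    in for the VR headset SDK, the camera feed, and the motor controllers.
    """
    frame_period = 1.0 / target_fps
    while True:
        start = time.monotonic()
        yaw, pitch = headset.read_orientation()  # where is the user looking?
        gimbal.move_to(yaw, pitch)               # synchronize the camera mount
        frame = camera.capture_frame()           # grab the live image
        headset.display(frame)                   # present it with minimal delay
        # Sleep off whatever remains of the frame budget to hold ~60 fps.
        time.sleep(max(0.0, frame_period - (time.monotonic() - start)))
```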

Results

  1. The gimbal system can move synchronously with the VR headset’s movements.
  2. The camera provides a live video feed to the VR headset at high-definition resolution and 60 frames per second.
  3. To reduce the complexity of the physical movement, one rotation axis was removed; the gimbal system currently supports pitch and yaw movements.
  4. On start-up, the gimbal system can adjust itself (homing) to match the current direction of the VR headset, with a small offset difference (a homing sketch follows this list).
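
A plausible shape for that start-up homing routine, using the end-stops mentioned in the System Overview; the motor and end-stop interfaces here are hypothetical assumptions, and the team’s real procedure is specified in the design documents:

```python
def home_axis(motor, endstop, backoff_steps: int = 50) -> None:
    """Creep one axis toward its end-stop, then back off and zero the position.

    `motor` and `endstop` are hypothetical interfaces used for illustration.
    """
    while not endstop.triggered():
        motor.step(-1)               # move slowly toward the mechanical limit
    motor.move_by(backoff_steps)     # release the switch
    motor.set_zero()                 # this position becomes the axis reference

def home_to_headset(gimbal, headset) -> None:
    """Home both axes, then rotate to face the headset's current direction."""
    home_axis(gimbal.yaw_motor, gimbal.yaw_endstop)
    home_axis(gimbal.pitch_motor, gimbal.pitch_endstop)
    yaw, pitch = headset.read_orientation()
    gimbal.move_to(yaw, pitch)       # a small offset may remain, per Results #4
```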

Future Work

  1. Add the third rotation axis (roll).
  2. Miniaturize the gimbal system.
  3. Send communication data over wireless.
  4. Improve live video resolution.
  5. Reduce motion sickness for users.
  6. Increase the range of movement of the gimbal system.

Project Files

ROB_Project_Charter

ROB_System_Requirement_Specifications

ROB_Architectural_Design

ROB_Detailed_Design_Specification

ROB_Poster
