Nebuloo

Team Name

Nebuloo

Timeline

Summer 2019 – Fall 2019

Students

  • Alex Waumann
  • Sunil Niroula
  • Juan Barragan
  • Luis Gonzalez Trejo

Abstract

The goal of this project is to create an efficient simulated testing environment for training AI models for a wide variety of applications, especially the training of autonomous anti-aircraft weapons.

Background

There are two parts to the problem we are trying to solve. The first is the use of human operators for ground-based anti-aircraft weapons. Operators generally use a remote control and a camera view to operate the weapon. This is expensive, since there is generally one operator per machine, and inefficient, since human working memory is limited, which makes decision making in critical situations slower and less reliable than it should be. Aside from decision making, aiming and tracking targets accurately enough to shoot them down is difficult for a human and requires constant practice to remain proficient. The second part is that training operators, whether human or autonomous systems, in the physical world takes a lot of time and money. It can require scheduled sessions, permits and permissions, and other obstacles that make training even more time consuming and expensive. These time constraints limit the number of test scenarios that operators can be trained on.

The solution to the first problem is an autonomous system. In general, this system should be able to identify, track, and shoot down planes, although this can be generalized to any flying object. The system should be able to use various data sources to decide when and whether it should shoot an aircraft down. However, automating the operator does not solve the training problem. It will still be expensive and time consuming to train these systems, albeit less expensive than training human operators, because the autonomous system will still need to be trained on various scenarios and tested in those same scenarios to verify that it performs as expected and makes the correct decision more often than its human counterpart.

The solution to the second problem is a hyper-realistic simulated training environment. It needs to be close enough to reality that training in the simulated environment transfers well to the physical environment. This approach has already been shown to work well with autonomous robot hands and self-driving cars, and it is also used to train human operators of various unmanned vehicles. It would make training cheaper, faster, and more robust. Scenarios can be procedurally generated to cover a larger scenario space than is possible with physical training and testing, and simulations can be run at very high speeds to cover years' worth of training in days or weeks.

Project Requirements

  1. User should be able to create custom simulated environments using existing 3D models.
  2. Simulation engine should be able to render scenes at a constant 60 fps.
  3. Simulation engine + AI agent should run together at a minimum of 24 fps.
  4. Simulation engine will be able to import 3D models in glTF format (a loading sketch appears after this list).
  5. AI agent layer + controller layer should work together to track moving objects in a scene.
  6. Controller layer should handle user input and map it to actions.
  7. Controller layer will allow the user to move around the simulated environment.

System Overview

With the current design, there will be a core rendering engine built on the Vulkan C++ API. This rendering engine will be used to build the simulation module. The simulation module will output frame data and send it to the object detection/recognition module, which will alter the frame by adding bounding boxes and labels around recognized objects. The altered frame will then be forwarded to the object tracking module along with data on which object it should track. This module will generate a movement suggestion that will be sent to the controller module. The controller module will also receive user input data (if in manual mode) and simulated external data that would be available to the software in a physical system, such as distance and elevation estimates of the target. From these inputs it will produce actions that are sent to the simulation module, which will provide handles to pan around and to shoot.
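
A minimal sketch of one pass through this pipeline is shown below. The data types and function names are hypothetical stand-ins for the Nebuloo modules described above, written in C++ to match the engine; the real interfaces may differ.

    #include <cstdint>
    #include <string>
    #include <vector>

    // Illustrative data types for one pass through the pipeline; not the actual Nebuloo interfaces.
    struct Frame        { uint32_t width = 0, height = 0; std::vector<uint8_t> rgba; };
    struct Detection    { std::string label; float x = 0, y = 0, w = 0, h = 0; };   // bounding box + label
    struct Suggestion   { float panDelta = 0, tiltDelta = 0; };                     // movement suggestion
    struct ExternalData { float distance = 0, elevation = 0; };                     // simulated sensor data
    struct Action       { float pan = 0, tilt = 0; bool fire = false; };

    // Stub stages standing in for the real modules.
    Frame simulate()                                         { return {640, 480, {}}; }
    std::vector<Detection> detectObjects(const Frame&)       { return {{"aircraft", 0.4f, 0.3f, 0.1f, 0.05f}}; }
    Suggestion trackTarget(const Frame&, const Detection& d) { return {d.x - 0.5f, d.y - 0.5f}; }
    Action control(const Suggestion& s, const ExternalData&) { return {s.panDelta, s.tiltDelta, false}; }
    void applyAction(const Action&)                          {}  // fed back into the simulation module

    // One iteration: simulation -> detection -> tracking -> controller -> simulation.
    void step(const ExternalData& sensors) {
        Frame frame = simulate();                                   // simulation module outputs frame data
        auto detections = detectObjects(frame);                     // detection module adds boxes and labels
        if (detections.empty()) return;                             // nothing recognized this frame
        Suggestion hint = trackTarget(frame, detections.front());   // tracking module suggests a movement
        Action action = control(hint, sensors);                     // controller merges hint with external data
        applyAction(action);                                        // pan/shoot handles exposed by the simulation
    }

    int main() { step({1200.0f, 300.0f}); }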

The user will be presented with a camera view from the anti-aircraft weapon, showing the bounding boxes and labels around recognized objects. A software switch will let the user toggle between autonomous mode and manual mode. In autonomous mode, the software will look for an aircraft, track it, and shoot it down (if it decides to do so while tracking). If the software is unsure whether it should shoot the target down (as determined by some threshold value), it will show the user a view explaining why it is unsure and ask whether it should fire. In manual mode, the user will be able to pan around and select any object with a bounding box. Once a target is selected, the software will take over until it shoots the target down; the user will be able to cancel this action at any point leading up to the shot. When a missile is launched, its camera view will be overlaid in a corner of the main weapon camera view, showing the missile seeking out the selected target.
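
The mode switch and the uncertainty threshold could be expressed roughly as follows. The threshold values, enum names, and decideEngagement function are assumptions made for illustration, not values taken from the specification.

    #include <iostream>

    // Hypothetical decision logic for the autonomous/manual switch described above.
    enum class Mode { Autonomous, Manual };
    enum class Decision { Ignore, AskOperator, Engage };

    // In autonomous mode the agent engages on its own above ENGAGE_THRESHOLD,
    // defers to the operator in the uncertain band, and ignores low-confidence targets.
    Decision decideEngagement(Mode mode, bool operatorSelected, float confidence) {
        constexpr float ENGAGE_THRESHOLD = 0.90f;  // assumed values, not from the spec
        constexpr float REVIEW_THRESHOLD = 0.60f;

        if (mode == Mode::Manual)
            return operatorSelected ? Decision::Engage : Decision::Ignore;

        if (confidence >= ENGAGE_THRESHOLD) return Decision::Engage;
        if (confidence >= REVIEW_THRESHOLD) return Decision::AskOperator;  // show the operator why it is unsure
        return Decision::Ignore;
    }

    int main() {
        // 0.75 falls in the uncertain band, so this prints 1 (AskOperator).
        std::cout << static_cast<int>(decideEngagement(Mode::Autonomous, false, 0.75f)) << "\n";
    }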

Results

As of demo day, the simulation engine implements many of the planned features; however, it is nowhere near completion, as the task proved to be more technically challenging than originally anticipated. The controller layer is mostly complete and handles the user input received by the simulation engine. The AI agent layer works on its own and is able to detect and track basic objects, but we ran out of time to integrate it with the rest of the system; building the pipeline that connects the controller to this layer proved to be more time consuming and complex than originally thought.


Future Work

  1. Add a friendlier user interface.
  2. Add texture mapping to 3D models.
  3. Finish the AI/Simulation engine integration.
  4. Add physics to our simulation engine.
  5. Add an easier method for importing custom 3D models.

Project Files

Project Charter (NebulooProjectCharter)

System Requirements Specification (link)

Architectural Design Specification (NebulooADS)

Detailed Design Specification (NebulooDDS)

Poster (Nebuloo-Poster)

References

  1. Khronos Vulkan Documentation, https://www.khronos.org/registry/vulkan/specs/1.1/styleguide.html
  2. Vulkan Tutorial, https://vulkan-tutorial.com/Introduction
  3. YOLO: Real-Time Object Detection, https://pjreddie.com/darknet/yolo/
  4. Jonathan Hui, Real-time Object Detection with YOLO, YOLOv2 and now YOLOv3,
    https://medium.com/@jonathan_hui/real-time-object-detection-with-yolo-yolov2-28b1b93e2088
