

  1. AUTONOMOUS DRIVING AGENT An agent by Stylianos Zafeiris for the Autonomous Agents (COMP513) course

  2. INTRODUCTION 01 02 DEEP Q-NETWORK 03 RESULTS

  3. 01 INTRODUCTION What is the project about?

  4. MAIN IDEA The main idea of the project is to create an autonomous driving agent which can drive in the CARLA simulator environment without any user input or the embedded autopilot feature.

  5. CARLA SIMULATOR CARLA is an open-source autonomous driving simulator used in AD research.
     ❖ It is scalable because of its server multi-client architecture
     ❖ It provides a powerful Python API
     ❖ Sensor diversity
     ❖ Works with ROS
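
As an illustration of the Python API mentioned above, here is a minimal sketch of connecting to a CARLA server, spawning a test vehicle, and attaching an RGB camera. It assumes a server running locally on the default port 2000; the vehicle blueprint, image resolution, and camera placement below are placeholder choices, not values taken from the slides.

```python
import carla

# Connect to a CARLA server assumed to be running on localhost:2000
client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()

# Spawn a test vehicle at the first available spawn point
blueprint_library = world.get_blueprint_library()
vehicle_bp = blueprint_library.filter('vehicle.*')[0]  # arbitrary vehicle blueprint
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)

# Attach an RGB camera sensor to the vehicle (resolution and placement are assumptions)
camera_bp = blueprint_library.find('sensor.camera.rgb')
camera_bp.set_attribute('image_size_x', '640')
camera_bp.set_attribute('image_size_y', '480')
camera_transform = carla.Transform(carla.Location(x=1.5, z=2.4))
camera = world.spawn_actor(camera_bp, camera_transform, attach_to=vehicle)
```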

  6. DEVELOPMENT STAGES
     Simple agent: A simple agent that can drive with autopilot was introduced
     Data acquisition: The simple agent was used to gather training data
     Model training: A model was trained with the gathered data
     On-the-fly model training: The trained model was further trained with live data
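
A rough sketch of the data-acquisition stage, assuming the vehicle and camera from the previous snippet: the embedded autopilot drives while each camera frame is stored together with the control (throttle, brake, steer) applied at that moment. The buffer-handling details are assumptions, not taken from the slides.

```python
import numpy as np

samples = []  # list of (image, control) training pairs

def record_sample(image):
    # Convert the raw BGRA buffer into an RGB numpy array
    array = np.frombuffer(image.raw_data, dtype=np.uint8)
    array = array.reshape((image.height, image.width, 4))[:, :, :3][:, :, ::-1]
    # Label: the control applied to the vehicle when the image was taken
    control = vehicle.get_control()
    samples.append((array, (control.throttle, control.brake, control.steer)))

# Let the embedded autopilot drive while the camera callback gathers labelled data
vehicle.set_autopilot(True)
camera.listen(record_sample)
```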

  7. 02 DEEP Q-NETWORK Network architecture

  8. Why use Deep Q-Networks? Continuous state space: the state space is continuous and as a result there is an infinite number of states, so discretization is computationally prohibitive. A Deep Q-Network takes the continuous real-time input from the sensors and learns the applicable action.

  9. NETWORK ARCHITECTURE The proposed network consists of:
     ● 5 convolutional layers, each followed by a MaxPooling layer of stride 2x2
     ● 2 dense layers of 100 nodes
     ● 1 dense layer with 50 nodes, and
     ● 1 dense layer with 3 outputs
     Each layer uses tanh as its activation function so that both negative and positive values can be propagated. The input to the network is the image taken from the RGB camera sensor attached to the test vehicle, and the model outputs three numbers as a tuple of the form (throttle, brake, steer), which is used to drive the vehicle autonomously. The labels used for training were tuples of the same form, taken from the control applied to the test vehicle at the time the image was captured. We can observe that the network has the form of a Convolutional Neural Network (CNN); the last layers, instead of the usual softmax activation used for classification, use tanh as described.
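
A minimal Keras sketch of the described architecture. The slides do not give filter counts, kernel sizes, or the input resolution, so the values below are placeholders; only the layer counts, the 2x2 max pooling, the dense sizes (100, 100, 50, 3), and the tanh activations follow the description.

```python
from tensorflow.keras import layers, models

def build_model(input_shape=(480, 640, 3)):
    """CNN mapping an RGB camera frame to a (throttle, brake, steer) tuple."""
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    # 5 convolutional layers, each followed by a 2x2 max-pooling layer of stride 2x2
    for filters in (24, 36, 48, 64, 64):  # filter counts are an assumption
        model.add(layers.Conv2D(filters, (3, 3), activation='tanh', padding='same'))
        model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
    model.add(layers.Flatten())
    # 2 dense layers of 100 nodes and 1 dense layer of 50 nodes, all with tanh
    model.add(layers.Dense(100, activation='tanh'))
    model.add(layers.Dense(100, activation='tanh'))
    model.add(layers.Dense(50, activation='tanh'))
    # Output layer: 3 values interpreted as (throttle, brake, steer)
    model.add(layers.Dense(3, activation='tanh'))
    return model
```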

  10. TRAINING THE NETWORK This process was the most computationally expensive one. At first the gathered data were used to train the network; afterwards, live data read from the sensors were used to increase accuracy.
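
A sketch of this training step under the same assumptions as the earlier snippets. The slides do not state the loss function, optimizer, or training schedule; mean-squared error, Adam, and the epoch and batch-size values below are placeholders commonly used when regressing control tuples.

```python
import numpy as np

# Stack the gathered (image, control) pairs into training arrays
images = np.array([s[0] for s in samples], dtype=np.float32) / 255.0
controls = np.array([s[1] for s in samples], dtype=np.float32)

model = build_model(input_shape=images.shape[1:])
model.compile(optimizer='adam', loss='mse', metrics=['accuracy'])

# Initial training on the gathered data; the same fit call can later be repeated
# with small batches of live sensor data for the on-the-fly training stage
model.fit(images, controls, epochs=10, batch_size=32, validation_split=0.1)
```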

  11. 25,500+ samples were used to train the network, but more training is still needed

  12. 03 RESULTS Experimental results and conclusion

  13. Model accuracy While the model was trained with the gathered data the accuracy was 54.04%. This was the highest accuracy achieved with the 3,500 samples used in the training process, and it determined the model hyperparameters.

  14. Model results Once the network was trained with the first 3,500 samples, the behavior of the vehicle wasn't as expected. That is why it was retrained with about 20,500 more samples, which brought moderate improvement, but the results still weren't promising. To improve the model's accuracy it must be retrained with more samples, but this is a very computationally expensive and time-consuming procedure.

  15. THANKS! Do you have any questions? szafeiris@isc.tuc.gr CREDITS: This presentation template was created by Slidesgo, including icons by Flaticon, and infographics & images by Freepik.
