Abstract: We address the problem of learning autonomous driving behaviors in urban intersections using deep reinforcement learning (DRL). DRL has become a popular choice for creating autonomous agents due to its success in various tasks. However, as the problems tackled become more complex, the number of training iterations necessary increases drastically. Curriculum learning has been shown to reduce the required training time and improve the performance of the agent, but creating an optimal curriculum often requires human handcrafting. In this work, we learn a policy for urban intersection crossing using DRL and introduce a method to automatically generate the curriculum for the training process from a candidate set of tasks. We compare the performance of automatically generated curriculum (AGC) training to that of randomly generated sequences and show that AGC can significantly reduce the training time while achieving similar or better performance.
keywords: {deep reinforcement learning; curriculum learning; automatically generated curriculum (AGC); autonomous vehicles; autonomous driving behaviors; urban intersection crossing; mobile robots; learning (artificial intelligence); machine learning; task analysis; heuristic algorithms},
Citing: Z. Qiao, K. Muelling, J. M. Dolan, P. Palanisamy and P. Mudalige, “Automatically Generated Curriculum based Reinforcement Learning for Autonomous Vehicles in Urban Environment,” 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, 2018, pp. 1233-1238.
doi: 10.1109/IVS.2018.8500603
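As a rough illustration of the idea in the abstract (not the authors' actual algorithm), an automatic curriculum generator can greedily pick the next training task from the candidate set based on the agent's recent learning progress on each task. The function and field names below are hypothetical:

```python
import random

def next_task(candidates, progress):
    """Pick the next curriculum task from a candidate set.

    candidates: list of task identifiers
    progress: dict mapping a task id to an estimate of the agent's
              recent learning progress on that task (e.g. improvement
              in average episode return)

    Tasks not yet attempted are tried first, so every candidate gets
    at least one evaluation; afterwards the task with the highest
    measured progress is selected.
    """
    unseen = [t for t in candidates if t not in progress]
    if unseen:
        return random.choice(unseen)
    return max(candidates, key=lambda t: progress[t])
```

A training loop would call `next_task` between episodes, update `progress` from observed returns, and thereby build the task sequence online instead of handcrafting it.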


September 8th, 2018

Implementing an Intelligent & Autonomous, Car-Driving Agent using Deep n-step Actor-Critic Algorithm

Condensed quick-start version of Chapter 8: Implementing an Intelligent & Autonomous Car-Driving Agent using Deep Actor-Critic Algorithm, discussed in the Hands-On Intelligent Agents with OpenAI Gym book. The first section lists the concepts covered, then dives straight into the code structure and elaborates on how you can train deep n-step advantage actor-critic agents in the CARLA driving environment.
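For orientation, the core quantity an n-step advantage actor-critic agent computes is the discounted n-step return, bootstrapped from the critic's value estimate at the end of the trajectory fragment. The sketch below is illustrative and not the book's exact implementation; names are hypothetical:

```python
def n_step_returns(rewards, bootstrap_value, gamma=0.99):
    """Compute discounted n-step returns for a trajectory fragment.

    rewards: rewards r_t, ..., r_{t+n-1} collected over n steps
    bootstrap_value: the critic's estimate V(s_{t+n}) of the state
                     reached after the last step
    gamma: discount factor

    Returns G_t, ..., G_{t+n-1}, computed backwards via
    G_k = r_k + gamma * G_{k+1}, with G_{t+n} = V(s_{t+n}).
    """
    returns = []
    g = bootstrap_value
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    return returns
```

The advantage at each step is then A_t = G_t - V(s_t); the actor is updated along the gradient of log pi(a_t | s_t) weighted by A_t, while the critic regresses V(s_t) toward G_t.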

August 12th, 2018

Hands-On Intelligent Agents With OpenAI Gym (HOIAWOG)!: Your guide to developing AI agents using deep reinforcement learning

This book will guide you through the process of implementing your own intelligent agents to solve both discrete- and continuous-valued sequential decision-making problems, with all the essential building blocks to develop, debug, train, visualize, customize, and test your intelligent agent implementations in a variety of learning environments, ranging from the Mountain Car and Cart Pole problems to Atari games and CARLA, an advanced simulator for autonomous driving. If you want a head start in building intelligent agents to solve problems and are looking for a structured yet concise, hands-on approach, you will enjoy this book and its code repository. The chapters in this book and the accompanying code repository are aimed at being simple to understand and easy to follow. While simple language is used wherever possible to describe the algorithms, the core theoretical concepts, including the mathematical equations, are laid out with brief and intuitive explanations, as they are essential for understanding the code implementation and for further modification and tailoring by readers.
The book begins by introducing readers to learning-based intelligent agents, the environments used to train these agents, and the tools and frameworks necessary to implement them. In particular, the book concentrates on deep reinforcement learning based intelligent agents that combine deep learning and reinforcement learning. The learning environments, which define the problem to be solved or the task to be completed, are based on the open-source OpenAI Gym library. PyTorch is the deep learning framework used for the learning agent implementations. All the code and scripts necessary to follow the book chapter by chapter are made available at the following
GitHub repository: Hands-On-Intelligent-Agents-With-OpenAI-Gym

August 11th, 2018

10 Machine Learning Tools that made it big in 2018

Developing smart, intelligent models has now become easier than ever, thanks to extensive research into and development of newer, more efficient tools and frameworks. While the likes of TensorFlow, Keras, and PyTorch ruled the roost in 2017 as the top machine learning and deep learning libraries, 2018 promised to be even more exciting, with a strong line-up of open source and enterprise tools ready to take over, or at least compete with, the current lot.

In this article, we take a look at 10 such tools and frameworks which made it big in 2018.

January 27th, 2015

Tele-operating Andy, the lunar rover that is expected to bag the $20M Google Lunar XPRIZE

Andy, the lunar rover from CMU. Tele-operating the Andy rover! If you are wondering who Andy is, read more here: Recently, [...]
September 28th, 2013

Prebuilt OpenCV library files for arm-linux-gnueabihf

Good news! Now you can start to use OpenCV on your ARM-based board in no time! Just download the OpenCV-2.4.9-for-arm shared […]

August 23rd, 2013

Building OpenCV 2.4.x with full hardware optimization for the Pandaboard ES

This wiki will walk you through the process of cross-compiling the latest (bleeding-edge) OpenCV library with SIMD optimization […]

August 17th, 2013

Building Qt 5.x on the Pandaboard ES with OpenGL ES

Qt 4.x used a platform plugin for OpenGL ES 2. The platform plugin approach: the HW platform is expected […]

August 16th, 2013

Installing libjpeg-turbo on the Pandaboard with SIMD vectorization utilizing the NEON co-processor

libjpeg-turbo is a fork of the original IJG libjpeg that uses SIMD instructions to accelerate baseline JPEG compression and decompression. […]

August 16th, 2013

Setting up an environment using chroot for developing applications for embedded targets


August 16th, 2013

Setting up a cross compiling environment to build linux applications for embedded targets

Setting up an environment using Scratchbox 2 for developing applications for embedded targets. This wiki will walk you through the […]