
About the book

Discrete prediction and control problems involve higher-level or behavioral decision making, where the space of available actions or decisions is discrete and countable. Several real-world problems can be simplified, if not solved outright, using intelligent agents that learn and adapt to make optimal discrete decisions. The capability to deal with continuous-valued spaces enables much finer control. Intelligent software agents that can learn to make optimal continuous-valued decisions make a vast majority of problems and tasks approachable and solvable by machines.

Your guide to developing AI agents using deep reinforcement learning.


Github   |   Amazon   |   Safari   |   Google Books    |    Packt    |   Play Store

If you are someone wanting to get a head start in building intelligent agents to solve problems, and you are looking for a structured yet concise, hands-on approach to follow, you will enjoy this book and its code repository. The chapters in this book and the accompanying code repository are aimed at being simple to understand and easy to follow. While simple language is used wherever possible to describe the algorithms, the core theoretical concepts, including the mathematical equations, are laid out with brief and intuitive explanations, as they are essential for understanding the code implementations and for further modification and tailoring by readers.

The book begins by introducing readers to learning-based intelligent agents, the environments used to train them, and the tools and frameworks necessary to implement them. In particular, the book concentrates on deep reinforcement learning based intelligent agents, which combine deep learning and reinforcement learning. The learning environments used in the book, which define the problems to be solved or the tasks to be completed, are based on the open source OpenAI Gym library. PyTorch is the deep learning framework used for the learning agent implementations. All the code and scripts necessary to follow the book chapter by chapter are available at the following GitHub repository: Hands-On-Intelligent-Agents-With-OpenAI-Gym. Readers are encouraged to follow the repository for code updates, extra documentation, and additional resources.

Chapter-wise summary of what is covered in the book

Chapter 1, Introduction to Intelligent Agents and Learning Environments, introduces the OpenAI Gym toolkit, which enables the development of several AI systems. It sheds light on the important features of the toolkit, which provides you with endless opportunities to create autonomous intelligent agents to solve several algorithmic tasks, games, and control tasks. By the end of this chapter, you will know enough to create an instance of a Gym environment using Python yourself.

Chapter 2, Reinforcement Learning and Deep Reinforcement Learning, provides a concise explanation of the basic terminology and concepts in reinforcement learning. The chapter will give you a good understanding of the basic reinforcement learning framework for developing AI agents. The chapter will also introduce deep reinforcement learning and give you a flavor of the types of advanced problems the algorithms enable you to solve.
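The reinforcement learning framework this chapter covers boils down to an agent-environment interaction loop: observe, act, receive a reward, repeat. The sketch below illustrates that loop with a hypothetical stand-in environment (the names `ToyEnv`, `run_episode`, and `max_steps` are illustrative, not from the book or the Gym library) that mimics the Gym-style `reset()`/`step()` interface:

```python
import random

class ToyEnv:
    """A minimal stand-in environment exposing the Gym-style interface:
    reset() returns an initial observation; step(action) returns a
    (observation, reward, done, info) tuple."""
    def __init__(self, goal=5):
        self.goal = goal
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        # action: 0 = decrement the state, 1 = increment the state
        self.state += 1 if action == 1 else -1
        done = self.state >= self.goal
        reward = 1.0 if done else 0.0
        return self.state, reward, done, {}

def run_episode(env, max_steps=100):
    """Run one episode with a random policy and return the total reward."""
    obs = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = random.choice([0, 1])  # a random agent, for illustration
        obs, reward, done, info = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward
```

A learning agent differs from this random one only in how it chooses `action` from `obs`; the surrounding loop stays the same.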

Chapter 3, Getting Started with OpenAI Gym and Deep Reinforcement Learning, jumps right in and gets your development machine ready with all the installations and configurations needed for using the learning environments, as well as PyTorch for developing deep learning algorithms.

Chapter 4, Exploring the Gym and its Features, walks you through the inventory of learning environments available in the Gym library, starting with an overview of how the environments are classified and named, which will help you choose the correct version and type from the 700+ learning environments available. You will then learn to explore the Gym, test out any environment you would like, and understand the interfaces and descriptions of the various environments.

Chapter 5, Implementing your First Learning Agent – Solving the Mountain Car problem, explains how to implement an AI agent using reinforcement learning to solve the mountain car problem. You will implement the agent, train it, and see it improve on its own. The implementation details will enable you to apply the concepts to develop and train an agent to solve various other tasks and/or games.
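At the core of the reinforcement learning agent built in this chapter is the tabular Q-learning update rule, Q(s, a) ← Q(s, a) + α(r + γ·maxₐ′ Q(s′, a′) − Q(s, a)). The sketch below shows that single update step on a small table (the function name, table sizes, and hyperparameter values are illustrative, not taken from the book's implementation):

```python
import numpy as np

def q_learning_update(Q, state, action, reward, next_state,
                      alpha=0.05, gamma=0.98):
    """One tabular Q-learning update:
    Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))."""
    td_target = reward + gamma * np.max(Q[next_state])
    td_error = td_target - Q[state][action]
    Q[state][action] += alpha * td_error
    return Q

# Tiny illustrative example: 3 discretized states, 2 actions.
Q = np.zeros((3, 2))
Q = q_learning_update(Q, state=0, action=1, reward=1.0, next_state=2)
```

For a continuous-state task like Mountain Car, the observation would first be discretized into one of the table's state indices before applying this update.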

Chapter 6, Implementing an Intelligent Agent for Optimal Control using Deep Q-Learning, will walk readers through the process of scaling the agent implementation up to the next level by improving the agent's learning algorithm, as well as developing and integrating useful utilities and routines to log, visualize, and configure the agent's performance. In particular, chapter 6 will guide readers through improving the Q-learning algorithm using deep Q-networks, experience replay memory, and double Q-learning. Utilities and routines that are helpful in learning system implementations in general are also discussed in chapter 6, including a decay scheduler (used for ε-decay), logging and performance-visualization routines using Tensorboard, JSON-file-based parameter management, Atari Gym environment wrappers, and pre-processing routines for PyTorch-based training scripts.
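A decay scheduler of the kind mentioned above can be sketched in a few lines. This version linearly anneals a value, such as the exploration rate ε in an ε-greedy policy, from a starting value down to a floor (the class name and parameter names here are illustrative, not necessarily those used in the book's repository):

```python
class LinearDecaySchedule:
    """Linearly decays a value (e.g. the exploration rate epsilon) from
    initial_value to final_value over max_steps, then holds it constant."""
    def __init__(self, initial_value, final_value, max_steps):
        assert initial_value > final_value, "initial_value should exceed final_value"
        self.initial_value = initial_value
        self.final_value = final_value
        self.decay_factor = (initial_value - final_value) / max_steps

    def __call__(self, step_num):
        value = self.initial_value - self.decay_factor * step_num
        return max(value, self.final_value)

# Anneal epsilon from 1.0 to 0.05 over the first 10,000 steps of training.
epsilon = LinearDecaySchedule(initial_value=1.0, final_value=0.05, max_steps=10000)
```

During training, the agent would call `epsilon(step_num)` each step and explore (act randomly) with that probability.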

Chapter 7, Creating Custom OpenAI Gym Environments – Carla Driving Simulator, will teach you how to convert a real-world problem into a learning environment with interfaces compatible with the OpenAI Gym. You will learn the anatomy of Gym environments and create your custom learning environment based on the Carla simulator that can be registered with the Gym and used for training agents that we develop.
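The anatomy of a Gym-compatible environment comes down to a class implementing `reset()`, `step()`, and `render()`. The skeleton below sketches that shape; in a real implementation the class would subclass `gym.Env` and declare `observation_space` and `action_space` using `gym.spaces`, both omitted here so the sketch stays self-contained. The class name, dynamics, reward, and termination condition are all illustrative placeholders, not the book's Carla environment:

```python
class CustomEnv:  # in practice: class CustomEnv(gym.Env)
    """Skeleton of a custom Gym-style environment."""
    def __init__(self):
        self.state = 0.0

    def reset(self):
        """Reset the environment and return the initial observation."""
        self.state = 0.0
        return self.state

    def step(self, action):
        """Apply the action; return (observation, reward, done, info)."""
        self.state += float(action)
        reward = -abs(self.state)       # illustrative reward signal
        done = abs(self.state) > 10.0   # illustrative termination condition
        return self.state, reward, done, {}

    def render(self, mode="human"):
        """Display the current state (here, just print it)."""
        print(f"state={self.state}")
```

Once such a class subclasses `gym.Env`, it can be registered with the Gym via `gym.envs.registration.register` and then instantiated with `gym.make`, exactly like the built-in environments.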

Chapter 8, Implementing an Intelligent & Autonomous Car Driving Agent using Deep Actor-Critic Algorithm, teaches you the fundamentals of Policy Gradient based reinforcement learning algorithms and helps you intuitively understand the deep n-step advantage actor-critic algorithm. You will then learn to implement a super-intelligent agent that can drive a car autonomously in the Carla simulator, using both synchronous and asynchronous implementations of the deep n-step advantage actor-critic algorithm.
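The "n-step" part of the algorithm refers to how returns are computed: rewards from n consecutive steps are accumulated, bootstrapping from the critic's value estimate of the state that follows the last step. A minimal sketch of that computation (the function name and signature are illustrative, not from the book's code):

```python
def n_step_returns(rewards, bootstrap_value, gamma=0.99):
    """Compute discounted n-step returns G_t = r_t + gamma * G_{t+1},
    seeding the recursion with the critic's value estimate of the state
    reached after the last reward (bootstrap_value)."""
    returns = []
    g = bootstrap_value
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()
    return returns
```

In an n-step advantage actor-critic agent, the advantage at each step is then the n-step return minus the critic's value estimate for that step's state; the actor is updated with the policy gradient weighted by this advantage, and the critic regresses toward the n-step returns.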

Chapter 9, Exploring the Learning Environment Landscape – Roboschool, Gym-Retro, StarCraft-II, DeepMindLab, takes you beyond the Gym and shows you around other well-developed suites of learning environments that you can use to train your intelligent agents. You will understand and learn to use the various Roboschool environments, the Gym Retro environments, the very popular StarCraft II environment, and the DeepMind Lab environments.

Chapter 10, Exploring the Learning Algorithm Landscape – DDPG (Actor-Critic), PPO (Policy-Gradient), Rainbow (Value-Based), provides insights into the latest deep reinforcement learning algorithms, with their fundamentals demystified based on what you learned in the previous chapters of this book. You will get a quick understanding of the core concepts behind the best algorithms in three different classes of deep reinforcement learning algorithms, namely the actor-critic based Deep Deterministic Policy Gradient (DDPG) algorithm, the policy gradient based Proximal Policy Optimization (PPO) algorithm, and the value-based Rainbow algorithm.

Hopefully you will enjoy the book and gain hands-on experience building intelligent agents using deep reinforcement learning, implemented with PyTorch.



Posts

September 8th, 2018

Implementing an Intelligent & Autonomous Car-Driving Agent using Deep n-step Actor-Critic Algorithm

Condensed quick-start version of Chapter 8, Implementing an Intelligent & Autonomous Car Driving Agent using Deep Actor-Critic Algorithm, from the Hands-on Intelligent Agents with OpenAI Gym book. The first section lists the concepts covered, then dives straight into the code structure and elaborates on how you can train deep n-step advantage actor-critic agents in the Carla driving environment.

August 12th, 2018

Hands-On Intelligent Agents With OpenAI Gym (HOIAWOG)!: Your guide to developing AI agents using deep reinforcement learning

This book will guide you through the process of implementing your own intelligent agents to solve both discrete- and continuous-valued sequential decision-making problems, with all the essential building blocks to develop, debug, train, visualize, customize, and test your intelligent agent implementations in a variety of learning environments, ranging from the Mountain Car and Cart Pole problems to Atari games and CARLA, an advanced simulator for autonomous driving. If you are someone wanting to get a head start in the direction of building intelligent agents to solve problems, and you are looking for a structured yet concise, hands-on approach to follow, you will enjoy this book and the code repository. The chapters in this book and the accompanying code repository are aimed at being simple to understand and easy to follow. While simple language is used wherever possible to describe the algorithms, the core theoretical concepts, including the mathematical equations, are laid out with brief and intuitive explanations, as they are essential for understanding the code implementations and for further modification and tailoring by readers.
The book begins by introducing readers to learning-based intelligent agents, the environments used to train them, and the tools and frameworks necessary to implement them. In particular, the book concentrates on deep reinforcement learning based intelligent agents, which combine deep learning and reinforcement learning. The learning environments used in the book, which define the problems to be solved or the tasks to be completed, are based on the open source OpenAI Gym library. PyTorch is the deep learning framework used for the learning agent implementations. All the code and scripts necessary to follow the book chapter by chapter are available at the following
GitHub repository: Hands-On-Intelligent-Agents-With-OpenAI-Gym

August 11th, 2018

10 Machine Learning Tools that made it big in 2018

Developing smart, intelligent models has now become easier than ever, thanks to extensive research into, and development of, newer and more efficient tools and frameworks. While the likes of Tensorflow, Keras, and PyTorch ruled the roost in 2017 as the top machine learning and deep learning libraries, 2018 promises to be even more exciting, with a strong line-up of open source and enterprise tools ready to take over – or at least compete with – the current lot.

In this article, we take a look at 10 such tools and frameworks which made it big in 2018.

January 27th, 2015

Tele-operating Andy – the lunar rover that is expected to bag the $20 million Google Lunar XPRIZE

Andy, the lunar rover from CMU. Tele-operating the Andy rover! If you are wondering who Andy is, read more here: http://www.cmu.edu/news/stories/archives/2014/november/november24_lunarroverandy.html Recently, [...]
September 28th, 2013

Prebuilt OpenCV library files for arm-linux-gnueabihf

Good news!… Now you can start to use OpenCV on your ARM-based board in no time! Just download the OpenCV-2.4.9-for-arm shared […]

August 23rd, 2013

Building OpenCV 2.4.x with full hardware optimization for the Pandaboard ES

This wiki will walk you through the process of cross compiling the latest (bleeding edge) OpenCV library with SIMD optimization […]

August 17th, 2013

Building Qt 5.x on pandaboard ES with OpenGL ES

Qt 4.x used a platform plugin for OpenGL ES 2. The platform plugin approach: the HW platform is expected […]

August 16th, 2013

Installing libjpeg-turbo on the Pandaboard with vectorization for SIMD utilizing the NEON coprocessor

libjpeg-turbo is a fork of the original IJG libjpeg which uses SIMD instructions to accelerate baseline JPEG compression and decompression. […]

August 16th, 2013

Setting up an environment using chroot for developing applications for embedded targets


August 16th, 2013

Setting up a cross compiling environment to build linux applications for embedded targets

1. Setting up an environment using scratchbox 2 for developing applications for embedded targets. This wiki will walk you through the […]