Reimagining the Dog: New DeepMind Models and Tutorial for Physics-Based RL Tasks
DeepMind researchers this week released several new models and a tutorial for their dm_control software stack for physics-based simulation and reinforcement learning (RL) environments using MuJoCo physics.
The dm_control toolkit comprises Python libraries and task suites for RL agents in an articulated-body simulation. It has been around for a couple of years, initially designed by DeepMind researchers and engineers to facilitate their own continuous control and robotics needs. Dm_control has been applied extensively across the UK-based AI company’s projects, serving as a fundamental component of continuous control research.
The dm_control package is open-sourced on GitHub, where it has received nearly 2,000 stars. An introductory tutorial for the package is also available as a Colaboratory notebook.
A DeepMind dm_control blog post explains that the package also includes a MuJoCo wrapper that provides convenient bindings to functions and data structures, the PyMJCF and Composer libraries that enable procedural model manipulation and task authoring, and the Control Suite.
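For a quick sense of the wrapper, here is a minimal sketch based on the package's public Python API; the toy pendulum MJCF string is our own illustration rather than a model shipped with dm_control:

```python
# Minimal sketch of the MuJoCo wrapper: compile a model from an MJCF XML
# string, step the simulation, and render an offscreen frame.
from dm_control import mujoco

PENDULUM_XML = """
<mujoco>
  <worldbody>
    <light pos="0 0 2"/>
    <body name="pole" pos="0 0 1">
      <joint name="hinge" type="hinge" axis="0 1 0"/>
      <geom name="rod" type="capsule" fromto="0 0 0 0 0 -0.5" size="0.02"/>
    </body>
  </worldbody>
</mujoco>
"""

physics = mujoco.Physics.from_xml_string(PENDULUM_XML)
physics.reset()
for _ in range(200):
    physics.step()
print(physics.data.time)                        # simulated time after 200 steps
pixels = physics.render(height=240, width=320)  # RGB array of the current scene
```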
The researchers say MuJoCo’s support of names for all model elements enables strings to index and slice into arrays, which leads to a much more robust, readable codebase. The PyMJCF library creates a Python object hierarchy with 1:1 correspondence to a MuJoCo model. Composer can be thought of as the “game engine” framework, which defines a particular order of runtime function calls and abstracts the affordances of reward, termination and observation.
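A minimal PyMJCF sketch (again with a toy model of our own) shows both ideas: named views of the simulation arrays, and Python objects that stay bound to the compiled model:

```python
# Sketch of named indexing and PyMJCF: build a model programmatically,
# compile it, and read state back through names or bound Python objects.
from dm_control import mjcf

model = mjcf.RootElement()
body = model.worldbody.add('body', name='torso', pos=[0, 0, 0.5])
hinge = body.add('joint', name='swing', type='hinge', axis=[0, 1, 0])
body.add('geom', name='rod', type='capsule',
         fromto=[0, 0, 0, 0, 0, -0.3], size=[0.03])

physics = mjcf.Physics.from_mjcf_model(model)
physics.step()

# Named indexing: element names label and index MuJoCo's state arrays.
print(physics.named.data.qpos)    # joint positions, labelled by name
# PyMJCF bindings: each Python element maps 1:1 to the compiled model.
print(physics.bind(hinge).qpos)   # the same joint, via its Python object
```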
The DeepMind Control Suite is a set of continuous control tasks with a standardized structure and rewards intended to serve as performance benchmarks for RL agents. The researchers also added a delightful new dog environment, although the dog has now been temporarily removed due to a VFS bug. At least it doesn’t have fleas!
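Running a Control Suite task follows the standard environment loop; a minimal sketch with a uniformly random policy (the walker/walk names are from the released suite) looks like this:

```python
# Minimal sketch: load a Control Suite task and run one episode with
# uniformly random actions drawn from the action spec.
import numpy as np
from dm_control import suite

env = suite.load(domain_name='walker', task_name='walk')
spec = env.action_spec()

time_step = env.reset()
total_reward = 0.0
while not time_step.last():
    action = np.random.uniform(spec.minimum, spec.maximum, size=spec.shape)
    time_step = env.step(action)
    total_reward += time_step.reward
print(total_reward)
```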
Also in the update are a set of configurable manipulation tasks with a robot arm and snap-together bricks as well as several locomotion tasks with scenarios such as soccer playing. The researchers have also made a locomotion framework available, which provides high-level abstractions and examples of locomotion tasks.
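Loading the new task families goes through their own entry points. The sketch below is hedged: the exact task identifier is taken from the released manipulation catalogue and may differ by version, so listing manipulation.ALL first is the safe check:

```python
# Sketch: load a brick-stacking manipulation task and a multi-agent
# soccer environment from the locomotion framework.
from dm_control import manipulation
from dm_control.locomotion import soccer as dm_soccer

print(list(manipulation.ALL))    # catalogue of available manipulation tasks
env = manipulation.load('stack_2_bricks_features', seed=42)

soccer_env = dm_soccer.load(team_size=2)   # 2-vs-2 soccer with walker agents
print(soccer_env.action_spec())
```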
The paper dm_control: Software and Tasks for Continuous Control is on arXiv.
Journalist: Yuan Yuan | Editor: Michael Sarazen
We know you don’t want to miss any story. Subscribe to our popular Synced Global AI Weekly to get weekly AI updates.