MultiTask Environments for Reinforcement Learning


MTEnv is a library to interface with environments for multi-task reinforcement learning. It has two main components:

  1. A core API/interface for multi-task environments.

  2. A collection of environments that implement this API.

Together, these two components should provide a standard interface for multi-task RL environments and make it easier to reuse components and tools across environments.

You can read more about the difference between MTEnv and single-task environments here.

Citing MTEnv

If you use MTEnv in your research, please use the following BibTeX entry:

@misc{Sodhani2021MTEnv,
  author =       {Shagun Sodhani and Ludovic Denoyer and Pierre-Alexandre Kamienny and Olivier Delalleau},
  title =        {MTEnv - Environment interface for multi-task reinforcement learning},
  howpublished = {Github},
  year =         {2021},
  url =          {}
}


MTEnv has two components - a core API and environments that implement the API.

The Core API can be installed via pip install mtenv or pip install git+

The list of environments that implement the API is available here. Any of these environments can be installed via pip install "mtenv[env_name]". For example, the MetaWorld environment can be installed via pip install "mtenv[metaworld]".

All the environments can be installed at once using pip install "mtenv[all]". However, note that some environments may have incompatible dependencies.

MTEnv can also be installed from source by first cloning the repo (git clone), changing into the directory (cd mtenv), and then using the pip commands described above. For example, pip install mtenv installs the core API, and pip install "mtenv[env_name]" installs a particular environment.


MTEnv provides an interface very similar to standard gym environments. One key difference between multi-task environments (that implement the MTEnv interface) and single-task environments is the observation that they return.

MultiTask Observation

The multi-task environments return a dictionary as the observation. This dictionary has two keys: (i) env_obs, which maps to the observation from the environment (i.e. the observation that a single-task environment returns), and (ii) task_obs, which maps to task-specific information from the environment. In the simplest case, task_obs can be an integer denoting the task index. In other cases, task_obs can provide richer information.

from mtenv import make
env = make("MT-MetaWorld-MT10-v0")
obs = env.reset()
# {'env_obs': array([-0.03265039,  0.51487777,  0.2368754 , -0.06968209,  0.6235982 ,
#    0.01492813,  0.        ,  0.        ,  0.        ,  0.03933976,
#    0.89743189,  0.01492813]), 'task_obs': 1}
action = env.action_space.sample()
# array([-0.76422   , -0.15384133,  0.74575615, -0.11724994], dtype=float32)
obs, reward, done, info = env.step(action)
# {'env_obs': array([-0.02583682,  0.54065546,  0.22773503, -0.06968209,  0.6235982 ,
#    0.01494118,  0.        ,  0.        ,  0.        ,  0.03933976,
#    0.89743189,  0.01492813]), 'task_obs': 1}
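Many policies expect a flat vector input. One common way to consume the dictionary observation is to one-hot encode an integer task_obs and concatenate it with env_obs. A minimal sketch, assuming task_obs is a task index and (as in MT10) there are 10 tasks:

```python
import numpy as np

NUM_TASKS = 10  # MT10 has 10 tasks; adjust for other benchmarks


def flatten_multitask_obs(obs, num_tasks=NUM_TASKS):
    """Concatenate env_obs with a one-hot encoding of the integer task_obs."""
    one_hot = np.zeros(num_tasks, dtype=np.float32)
    one_hot[obs["task_obs"]] = 1.0
    env_obs = np.asarray(obs["env_obs"], dtype=np.float32)
    return np.concatenate([env_obs, one_hot])


# A dummy observation shaped like the MT10 one above (12-dim env_obs):
obs = {"env_obs": np.zeros(12), "task_obs": 1}
flat = flatten_multitask_obs(obs)
print(flat.shape)  # (22,)
```

This keeps the environment interface unchanged while giving the policy an explicit task descriptor; richer task_obs values (e.g. task embeddings) can be concatenated directly instead.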

Contributing to MTEnv

There are several ways to contribute to MTEnv.

  1. Use MTEnv in your research.

  2. Contribute a new environment. We support many environments via MTEnv and are looking forward to adding more environments. Contributors will be added as authors of the library. You can learn more about the workflow of adding an environment here.

  3. Check out the good-first-issues on GitHub and contribute to fixing those issues.

  4. Check out additional details here.


Ask questions in the chat or in GitHub issues.


Task State

Task State contains all the information that the environment needs to switch to any other task.
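As a minimal illustration of this idea, consider a toy 1-D goal-reaching environment where the task state is just the goal position. The class and method names here are illustrative, not MTEnv's actual API:

```python
import random


class GoalReachingEnv:
    """Toy 1-D goal-reaching environment: move the agent to a goal position."""

    def __init__(self, seed=0):
        self._rng = random.Random(seed)
        self._goal = self._rng.uniform(-1.0, 1.0)
        self._pos = 0.0

    def get_task_state(self):
        # Everything needed to recreate (switch back to) this task.
        return {"goal": self._goal}

    def set_task_state(self, task_state):
        # Switch to another task, given its state.
        self._goal = task_state["goal"]

    def reset(self):
        self._pos = 0.0
        return {"env_obs": [self._pos], "task_obs": self._goal}

    def step(self, action):
        self._pos += action
        reward = -abs(self._pos - self._goal)
        done = abs(self._pos - self._goal) < 0.05
        return {"env_obs": [self._pos], "task_obs": self._goal}, reward, done, {}


env_a = GoalReachingEnv(seed=1)
env_b = GoalReachingEnv(seed=2)
env_b.set_task_state(env_a.get_task_state())  # env_b now runs env_a's task
```

Because the task state fully determines the task, serializing it is enough to reproduce or resume a task in another environment instance.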