Getting Started with MetaDrive
Try out MetaDrive with one line
We provide a script that lets you try out MetaDrive with keyboard control immediately after installation! Please run:
```bash
# Make sure current folder does not have a sub-folder named metadrive
python -m metadrive.examples.drive_in_single_agent_env
```
In the same script, you can even experience an "auto-drive" journey carried out by our pre-trained RL agent. Pressing T in the main window will kick this off. You can also press H to see helper information on other shortcuts.
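As a rough sketch of what such an interactive session sets up, the environment can be configured with rendering and keyboard control enabled. The key names below (`use_render`, `manual_control`) follow common MetaDrive configuration options, but treat this as an illustrative assumption rather than a verified listing of the example script's internals.

```python
# Minimal sketch of a config for an interactive, keyboard-controlled session.
# Key names are assumptions based on common MetaDrive config options.
config = dict(
    use_render=True,      # open the 3D rendering window
    manual_control=True,  # take over the ego vehicle with the keyboard
)

# With MetaDrive installed, you would then create the environment, e.g.:
# from metadrive import MetaDriveEnv
# env = MetaDriveEnv(config)
```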
To enjoy the process of generating maps with our Procedural Generation (PG) algorithm, please run this script:
```bash
python -m metadrive.examples.procedural_generation
```
You can also draw multiple maps generated by the PG algorithm in the top-down view by running:
```bash
python -m metadrive.examples.draw_maps
```
Besides, you can verify the simulation efficiency of MetaDrive by running:
```bash
python -m metadrive.examples.profile_metadrive
```
As we will discuss in Existing RL Environments, MetaDrive provides three sets of RL environments: the generalization environments, the Safe RL environments, and the Multi-agent RL environments. We provide examples for these suites as follows:
```bash
# Make sure current folder does not have a sub-folder named metadrive

# ===== Generalization Environments =====
python -m metadrive.examples.drive_in_single_agent_env

# ===== Safe RL Environments =====
python -m metadrive.examples.drive_in_safe_metadrive_env

# ===== Multi-agent Environments =====
# Options for --env: roundabout, intersection, tollgate, bottleneck, parkinglot, pgma
python -m metadrive.examples.drive_in_multi_agent_env --env pgma
```
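If you want to launch these examples programmatically (e.g. from a benchmark harness), the suite-to-module mapping above can be captured in a small lookup table. The module names come from the commands listed here; the launcher function itself is a hypothetical helper, not part of MetaDrive.

```python
# Hypothetical launcher helper: maps each RL suite to the example module
# shown above and builds the corresponding `python -m ...` command line.
import sys

EXAMPLE_MODULES = {
    "generalization": "metadrive.examples.drive_in_single_agent_env",
    "safe": "metadrive.examples.drive_in_safe_metadrive_env",
    "multi_agent": "metadrive.examples.drive_in_multi_agent_env",
}

def build_command(suite, env_name=None):
    """Return the argv list for launching one of the example scripts."""
    argv = [sys.executable, "-m", EXAMPLE_MODULES[suite]]
    if env_name is not None:  # only the multi-agent example takes --env
        argv += ["--env", env_name]
    return argv
```

For instance, `build_command("multi_agent", "pgma")` reproduces the last command above, and the returned list can be passed directly to `subprocess.run`.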
Using MetaDrive in Your Code
The usage of MetaDrive is the same as that of other Gym environments. Almost all decision-making algorithms are compatible with MetaDrive, as long as they are compatible with OpenAI Gym. The following script is a minimal example of instantiating a MetaDrive environment:
```python
import metadrive  # Import this package to register the environment!
import gym

env = gym.make("MetaDrive-v0", config=dict(use_render=True))
env.reset()
for i in range(1000):
    obs, reward, done, info = env.step(env.action_space.sample())
    env.render()
    if done:
        env.reset()
env.close()
```
Please note that each process should host only one single MetaDrive instance due to limitations of the underlying simulation engine. As a workaround, we provide an asynchronous version of MetaDrive built on the Ray framework; please find the environment in remote_env.py.
You can also try out our example of training RL policies with RLLib in Training with RLLib.