Get started with Leaderboard 1.0
This page covers getting started with Leaderboard version 1.0. If you intend to use the latest Leaderboard version 2.0, please refer to its Get Started guide instead.
In order to start developing your autonomous agent you will need to go through the following steps.
Index
- 1. System setup
- 1.1 Get CARLA 0.9.10.1
- 1.2 Get the Leaderboard and Scenario_Runner
- 1.3 Define the environment variables
- 2. Create autonomous agents with the Leaderboard
- 3. Creating your own Autonomous Agent
- 3.1 Create get_entry_point
- 3.2 Override the setup method
- 3.3 Override the sensors method
- 3.4 Override the run_step method
- 3.5 Override the destroy method
- 3.6 ROS based agents
- 4. Training and testing your agent
1. System setup
1.1 Get CARLA 0.9.10.1
- Download the binary CARLA 0.9.10.1 release.
- Unzip the package into a folder, e.g. CARLA. In the following commands, change the ${CARLA_ROOT} variable to correspond to your CARLA root folder.
- In order to use the CARLA Python API you will need to install some dependencies in your favorite environment. As a reference, for conda, start by creating a new environment:
conda create -n py37 python=3.7
conda activate py37
cd ${CARLA_ROOT} # Change ${CARLA_ROOT} for your CARLA root folder
pip3 install -r PythonAPI/carla/requirements.txt
- Download the set of additional maps to extend the amount of training data available. Follow the instructions provided here to install these maps.
Make sure to download 0.9.10.1 and its corresponding maps. This is the exact version used by the online servers.
1.2 Get the Leaderboard and Scenario_Runner
- Download the Leaderboard repository.
git clone -b leaderboard-1.0 --single-branch https://github.com/carla-simulator/leaderboard.git
In the following commands, change the ${LEADERBOARD_ROOT}
variable to correspond to your Leaderboard root folder.
- Install the required Python dependencies.
cd ${LEADERBOARD_ROOT} # Change ${LEADERBOARD_ROOT} for your Leaderboard root folder
pip3 install -r requirements.txt
- Download the Scenario_Runner repository.
git clone -b leaderboard --single-branch https://github.com/carla-simulator/scenario_runner.git
In the following commands, change the ${SCENARIO_RUNNER_ROOT} variable to correspond to your Scenario_Runner root folder.
- Install the required Python dependencies using the same Python environment.
cd ${SCENARIO_RUNNER_ROOT} # Change ${SCENARIO_RUNNER_ROOT} for your Scenario_Runner root folder
pip3 install -r requirements.txt
1.3 Define the environment variables
We need to make sure that the different modules can find each other.
- Open the ~/.bashrc profile with the following command:
gedit ~/.bashrc
- Edit your ~/.bashrc profile, adding the definitions below. Save and close the file after editing.
export CARLA_ROOT=PATH_TO_CARLA_ROOT
export SCENARIO_RUNNER_ROOT=PATH_TO_SCENARIO_RUNNER
export LEADERBOARD_ROOT=PATH_TO_LEADERBOARD
export PYTHONPATH="${CARLA_ROOT}/PythonAPI/carla/":"${SCENARIO_RUNNER_ROOT}":"${LEADERBOARD_ROOT}":"${CARLA_ROOT}/PythonAPI/carla/dist/carla-0.9.10-py3.7-linux-x86_64.egg":${PYTHONPATH}
- Remember to source .bashrc using the following command for these changes to take effect:
source ~/.bashrc
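Before running anything heavier, the environment setup above can be sanity-checked from Python. This is a small illustrative helper, not part of the Leaderboard; the variable names are the ones defined in this guide:

```python
# Illustrative check that the environment variables defined above are visible.
import os

REQUIRED_VARS = ["CARLA_ROOT", "SCENARIO_RUNNER_ROOT", "LEADERBOARD_ROOT"]

def missing_variables(environ=os.environ):
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not environ.get(name)]

# Hypothetical environment with one variable missing:
example_env = {"CARLA_ROOT": "/opt/CARLA", "LEADERBOARD_ROOT": "/opt/leaderboard"}
print(missing_variables(example_env))  # ['SCENARIO_RUNNER_ROOT']
```

If the printed list is non-empty in your shell, revisit the ~/.bashrc definitions above.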
2. Create Autonomous Agents with the Leaderboard
2.1 First steps with the Leaderboard
The Leaderboard will take care of running your autonomous agent and evaluating its behavior in different traffic situations across multiple routes. To better understand this process, let’s run a basic agent.
- Run the CARLA server in one terminal.
cd ${CARLA_ROOT}
./CarlaUE4.sh -quality-level=Epic -world-port=2000 -resx=800 -resy=600
- In another terminal, navigate to ${LEADERBOARD_ROOT}. While the Leaderboard is run using a Python script, the number of arguments involved makes it cumbersome to do so directly from the terminal. It is therefore recommended to use a bash script:
touch test_run.sh
chmod +x test_run.sh
- Paste the following code into test_run.sh. This will set some environment variables for parameterization and run run_evaluation.sh, which passes these variables as arguments to the leaderboard_evaluator.py script.
# Parameterization settings. These will be explained in 2.2. For now, simply copy them to run the test.
export SCENARIOS=${LEADERBOARD_ROOT}/data/all_towns_traffic_scenarios_public.json
export ROUTES=${LEADERBOARD_ROOT}/data/routes_devtest.xml
export REPETITIONS=1
export DEBUG_CHALLENGE=1
export TEAM_AGENT=${LEADERBOARD_ROOT}/leaderboard/autoagents/human_agent.py
export CHECKPOINT_ENDPOINT=${LEADERBOARD_ROOT}/results.json
export CHALLENGE_TRACK_CODENAME=SENSORS
./scripts/run_evaluation.sh
- Run the script:
./test_run.sh
This will launch a pygame window that lets you manually control an agent. Follow the route indicated by the colored waypoints to reach your destination. The script cycles through 6 towns, loading a single test route in each one.
Follow the route and respect traffic rules until you reach your destination.
Manually interrupting the Leaderboard will prematurely stop the simulation of the current route and automatically move on to the next one.
2.2 Understanding the Leaderboard components
When running the test, we set a series of parameters. Let’s understand these and their role in the Leaderboard.
- SCENARIOS (JSON) — The set of scenarios that will be tested in the simulation. A scenario is defined as a traffic situation. Agents will have to overcome these scenarios in order to pass the test. Participants have access to a set of traffic scenarios that work on the publicly available towns. There are 10 types of scenarios that are instantiated using different parameters. Here is a list of the available scenarios.
- ROUTES (XML) — The set of routes that will be used for the simulation. Every route has a starting point (first waypoint) and an ending point (last waypoint). Additionally, routes can contain a weather profile to set specific weather conditions. An XML file contains many routes, each one with an ID. Users can modify, add, and remove routes for training and validation purposes. The Leaderboard ships with a set of routes for debugging, training, and validation. The routes used for the online evaluation are secret.
- REPETITIONS (int) — Number of times each route is repeated for statistical purposes.
- TEAM_AGENT (Python script) — Path to the Python script that launches the agent. This has to be a class inherited from leaderboard.autoagents.autonomous_agent.AutonomousAgent. The steps to create an agent are explained in the next section.
Other relevant parameters are described below.
- TEAM_CONFIG (defined by the user) — Path to an arbitrary configuration file read by the provided agent. You are responsible for defining and parsing this file within your agent class.
- DEBUG_CHALLENGE (int) — Flag indicating whether debug information should be shown during the simulation. By default this variable is unset (0), and no debug information is displayed. When set to 1, the simulator displays the reference route to be followed. If set to anything greater than 1, the engine prints the complete state of the simulation for debugging purposes.
- CHECKPOINT_ENDPOINT (JSON) — Name of the file where the Leaderboard metrics will be recorded.
- RECORD_PATH (string) — Path to a folder that will store the CARLA logs. Unset by default.
- RESUME — Flag indicating whether the simulation should be resumed from the last route. Unset by default.
- CHALLENGE_TRACK_CODENAME (string) — Track in which the agent competes. There are two options: SENSORS and MAP. The SENSORS track gives access to multiple cameras, a LIDAR, a RADAR, a GNSS, an IMU, and a speedometer. In addition to these sensors, the MAP track allows direct access to the OpenDRIVE HD map. You are responsible for parsing and processing the OpenDRIVE map as needed.
These environment variables are passed to ${LEADERBOARD_ROOT}/leaderboard/leaderboard_evaluator.py
, which serves as the entry point to perform the simulation. Take a look at leaderboard_evaluator.py
to find out more details on how your agent will be executed and evaluated.
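Based on the parameter descriptions above, the mapping from environment variables to evaluator flags can be sketched as follows. The flag names are assumed from run_evaluation.sh and may differ between Leaderboard versions, so treat this purely as an illustration:

```python
# Illustrative sketch of how run_evaluation.sh forwards the environment
# variables to leaderboard_evaluator.py. Flag names are assumptions.
def build_evaluator_args(env):
    """Assemble the (assumed) flag list passed to leaderboard_evaluator.py."""
    return [
        "--scenarios={}".format(env["SCENARIOS"]),
        "--routes={}".format(env["ROUTES"]),
        "--repetitions={}".format(env.get("REPETITIONS", "1")),
        "--agent={}".format(env["TEAM_AGENT"]),
        "--checkpoint={}".format(env.get("CHECKPOINT_ENDPOINT", "results.json")),
        "--track={}".format(env["CHALLENGE_TRACK_CODENAME"]),
        "--debug={}".format(env.get("DEBUG_CHALLENGE", "0")),
    ]

# Values taken from the test_run.sh example in section 2.1:
example = {
    "SCENARIOS": "data/all_towns_traffic_scenarios_public.json",
    "ROUTES": "data/routes_devtest.xml",
    "TEAM_AGENT": "leaderboard/autoagents/human_agent.py",
    "CHALLENGE_TRACK_CODENAME": "SENSORS",
}
print(build_evaluator_args(example)[0])  # --scenarios=data/all_towns_traffic_scenarios_public.json
```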
3. Creating your own Autonomous Agent
The definition of a new agent starts by creating a new class that inherits from leaderboard.autoagents.autonomous_agent.AutonomousAgent
.
3.1 Create get_entry_point
First, define a function called get_entry_point
that returns the name of your new class.
This will be used to automatically instantiate your agent.
from leaderboard.autoagents.autonomous_agent import AutonomousAgent

def get_entry_point():
    return 'MyAgent'

class MyAgent(AutonomousAgent):
    ...
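To see why get_entry_point matters, here is a self-contained mimic of how the Leaderboard could look up your class by name. The real loading logic lives in leaderboard_evaluator.py; the stand-in base-less class below is only for illustration:

```python
# Self-contained mimic of entry-point-based instantiation (illustrative only).
def get_entry_point():
    return 'MyAgent'

class MyAgent:  # stand-in for a subclass of AutonomousAgent
    pass

# The Leaderboard imports your module and looks the class up by its name:
agent_class = globals()[get_entry_point()]
agent = agent_class()
print(type(agent).__name__)  # MyAgent
```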
3.2 Override the setup method
Within your agent class, override the setup method. This method performs all the initialization and definitions needed by your agent, and is automatically called each time a route is initialized. It can receive an optional argument pointing to a configuration file, which users are expected to parse. At a minimum, you need to specify which track you are participating in.
from leaderboard.autoagents.autonomous_agent import Track
...
def setup(self, path_to_conf_file):
    self.track = Track.SENSORS  # At a minimum, this method sets the Leaderboard modality. In this case, SENSORS
The self.track attribute must be an enum, not a string. It can only take the values Track.SENSORS or Track.MAP.
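A minimal stand-in of the Track enum illustrates the difference between the enum member and a plain string. The real enum is defined in leaderboard.autoagents.autonomous_agent; the values below are assumed for illustration:

```python
# Minimal stand-in for the Track enum (the real one lives in the Leaderboard).
from enum import Enum

class Track(Enum):
    SENSORS = 'SENSORS'
    MAP = 'MAP'

track = Track.SENSORS
print(track == 'SENSORS')        # False: an enum member is not a plain string
print(isinstance(track, Track))  # True
```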
3.3 Override the sensors method
You will also have to override the sensors
method, which defines all the sensors required by your agent.
def sensors(self):
    sensors = [
        {'type': 'sensor.camera.rgb', 'id': 'Center',
         'x': 0.7, 'y': 0.0, 'z': 1.60, 'roll': 0.0, 'pitch': 0.0, 'yaw': 0.0,
         'width': 300, 'height': 200, 'fov': 100},
        {'type': 'sensor.lidar.ray_cast', 'id': 'LIDAR',
         'x': 0.7, 'y': -0.4, 'z': 1.60, 'roll': 0.0, 'pitch': 0.0, 'yaw': -45.0},
        {'type': 'sensor.other.radar', 'id': 'RADAR',
         'x': 0.7, 'y': -0.4, 'z': 1.60, 'roll': 0.0, 'pitch': 0.0, 'yaw': -45.0, 'fov': 30},
        {'type': 'sensor.other.gnss', 'id': 'GPS',
         'x': 0.7, 'y': -0.4, 'z': 1.60},
        {'type': 'sensor.other.imu', 'id': 'IMU',
         'x': 0.7, 'y': -0.4, 'z': 1.60, 'roll': 0.0, 'pitch': 0.0, 'yaw': -45.0},
        {'type': 'sensor.opendrive_map', 'id': 'OpenDRIVE', 'reading_frequency': 1},
        {'type': 'sensor.speedometer', 'id': 'Speed'},
    ]
    return sensors
Most of the sensor attributes have fixed values, which can be checked in agent_wrapper.py. This ensures that all teams compete within a common sensor framework.
Every sensor is represented as a dictionary containing the following attributes:
- type: the type of sensor to be added.
- id: the label given to the sensor, used to access it later.
- other attributes: these are sensor dependent, e.g. extrinsics and fov.
Users can set both the intrinsic and extrinsic parameters (location and orientation) of each sensor, in relative coordinates with respect to the vehicle. Please note that CARLA uses the Unreal Engine coordinate system: x-front, y-right, z-up.
The available sensors are:
- sensor.camera.rgb — Regular camera that captures images.
- sensor.lidar.ray_cast — Velodyne 64 LIDAR.
- sensor.other.radar — Long-range RADAR (up to 100 meters).
- sensor.other.gnss — GPS sensor returning geolocation data.
- sensor.other.imu — 6-axis Inertial Measurement Unit.
- sensor.opendrive_map — Pseudosensor that exposes the HD map in OpenDRIVE format, parsed as a string.
- sensor.speedometer — Pseudosensor that provides an approximation of your linear velocity.
Trying to set any other sensor, or misspelling these, will make the setup fail.
You can use any of these sensors to configure your sensor stack. However, in order to keep a moderate computational load, we have set the following limits on the number of sensors of each type that can be added to an agent:
- sensor.camera.rgb: 4
- sensor.lidar.ray_cast: 1
- sensor.other.radar: 2
- sensor.other.gnss: 1
- sensor.other.imu: 1
- sensor.opendrive_map: 1
- sensor.speedometer: 1
Trying to add too many units of a sensor will make the setup fail.
There are also spatial restrictions that limit the placement of your sensors within the volume of your vehicle. If a sensor is located more than 3 meters away from its parent in any axis (e.g. [3.1,0.0,0.0]
), the setup will fail.
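The limits and placement restriction above can be summarized in a small illustrative checker. The real validation is performed by the Leaderboard itself (see agent_wrapper.py); this sketch only mirrors the rules listed in this guide:

```python
# Illustrative validation of a sensor configuration against the documented
# limits and the 3-meter placement restriction. Not the Leaderboard's code.
SENSOR_LIMITS = {
    'sensor.camera.rgb': 4,
    'sensor.lidar.ray_cast': 1,
    'sensor.other.radar': 2,
    'sensor.other.gnss': 1,
    'sensor.other.imu': 1,
    'sensor.opendrive_map': 1,
    'sensor.speedometer': 1,
}
MAX_OFFSET = 3.0  # meters from the vehicle on any axis

def validate_sensors(sensors):
    counts = {}
    for sensor in sensors:
        stype = sensor['type']
        if stype not in SENSOR_LIMITS:
            return False  # unknown or misspelled sensor type
        counts[stype] = counts.get(stype, 0) + 1
        if counts[stype] > SENSOR_LIMITS[stype]:
            return False  # too many units of this sensor type
        for axis in ('x', 'y', 'z'):
            if abs(sensor.get(axis, 0.0)) > MAX_OFFSET:
                return False  # placed outside the allowed volume
    return True

print(validate_sensors([{'type': 'sensor.other.gnss', 'x': 0.7, 'y': -0.4, 'z': 1.6}]))  # True
print(validate_sensors([{'type': 'sensor.camera.rgb', 'x': 3.1}]))                       # False
```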
3.4 Override the run_step method
This method will be called once per time step to produce a new action in the form of a carla.VehicleControl object. Make sure this function returns the control object, which will be used to update your agent.
def run_step(self, input_data, timestamp):
    control = self._do_something_smart(input_data, timestamp)
    return control
- input_data: A dictionary containing sensor data for the requested sensors. The data has been preprocessed in sensor_interface.py, and will be given as numpy arrays. This dictionary is indexed by the ids defined in the sensors method.
- timestamp: A timestamp of the current simulation instant.
Remember that you also have access to the route that the ego agent should travel to reach its destination. Use the self._global_plan member to access the geolocation route and self._global_plan_world_coord for its world-location counterpart.
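As a rough illustration of the kind of logic that might sit behind a helper such as _do_something_smart, here is a simple proportional speed controller in plain Python. The function name and gain are hypothetical, and a real agent would map values like these into a carla.VehicleControl object:

```python
# Hypothetical longitudinal controller sketch; not part of the Leaderboard.
def longitudinal_control(current_speed, target_speed, gain=0.5):
    """Return (throttle, brake), each in [0, 1], from a speed error in m/s."""
    error = target_speed - current_speed
    if error >= 0.0:
        return min(gain * error, 1.0), 0.0   # accelerate toward target speed
    return 0.0, min(-gain * error, 1.0)      # brake when going too fast

print(longitudinal_control(5.0, 7.0))   # (1.0, 0.0)
print(longitudinal_control(10.0, 8.0))  # (0.0, 1.0)
```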
3.5 Override the destroy method
At the end of each route, the destroy method will be called. Your agent can override it to perform any needed cleanup, for example, to free the memory used by a network.
def destroy(self):
    pass
3.6 ROS based agents
If you want to use ROS as part of your agent, here are some recommendations to take into account:
- ROS Melodic: Since the Leaderboard is only compatible with Python 3, we recommend using roslibpy to communicate between the Leaderboard and your ROS stack.
- ROS Noetic: As ROS Noetic targets Python 3, you can directly use rospy to create a node that communicates between the Leaderboard and your ROS stack.
- ROS2 Foxy: Similar to ROS Noetic, you can directly use rclpy to establish the communication with your stack.
4. Training and testing your agent
We have prepared a set of predefined routes to serve as a starting point. You can use these routes for training and for verifying the performance of your agent. The routes can be found in the ${LEADERBOARD_ROOT}/data folder:
- routes_training.xml: 50 routes intended to be used as training data (112.8 km).
- routes_testing.xml: 26 routes intended to be used as verification data (58.9 km).
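A route file can be inspected with the Python standard library. The inline XML below only mimics the general structure of the Leaderboard route files (a route id, a town, and a list of waypoints); check the files shipped in ${LEADERBOARD_ROOT}/data for the exact schema:

```python
# Sketch of reading a route file; the sample XML is an assumed structure.
import xml.etree.ElementTree as ET

SAMPLE = """
<routes>
  <route id="0" town="Town01">
    <waypoint x="100.0" y="55.0" z="0.0"/>
    <waypoint x="120.0" y="55.0" z="0.0"/>
  </route>
</routes>
"""

def summarize_routes(xml_text):
    """Return (id, town, number of waypoints) for every route in the file."""
    root = ET.fromstring(xml_text)
    return [
        (route.get('id'), route.get('town'), len(route.findall('waypoint')))
        for route in root.findall('route')
    ]

print(summarize_routes(SAMPLE))  # [('0', 'Town01', 2)]
```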
4.1 Baselines
- Brady Zhou has created a wonderful starter kit based on the Learning by Cheating approach.