Get started

In order to start developing your autonomous agent you will need to go through the following steps.


1. System setup

1.1 Get CARLA

${CARLA_ROOT} corresponds to your CARLA root folder. Change this for your CARLA root folder when copying the commands below.

  • In order to use the CARLA Python API you will need to install some dependencies in your favorite environment. As a reference, for conda, start by creating a new environment:
conda create -n py37 python=3.7
conda activate py37
cd ${CARLA_ROOT}  # Change ${CARLA_ROOT} for your CARLA root folder

pip3 install -r PythonAPI/carla/requirements.txt
easy_install PythonAPI/carla/dist/carla-0.9.10-py3.7-linux-x86_64.egg
  • Feel free to download the set of additional maps to extend the amount of training data available. Follow the instructions provided here to install these maps.

Make sure to download the CARLA release that matches the egg file installed above (0.9.10) and its corresponding maps. This is the exact version used by the online servers.

1.2 Get the Leaderboard and Scenario_Runner

  • Download the Leaderboard repository.
git clone -b stable --single-branch https://github.com/carla-simulator/leaderboard.git

${LEADERBOARD_ROOT} corresponds to your Leaderboard root folder. Change this for your Leaderboard root folder when copying the commands below.

  • Install the required Python dependencies.
cd ${LEADERBOARD_ROOT} # Change ${LEADERBOARD_ROOT} for your Leaderboard root folder
pip3 install -r requirements.txt
  • Download the Scenario_Runner repository.
git clone -b leaderboard --single-branch https://github.com/carla-simulator/scenario_runner.git

${SCENARIO_RUNNER_ROOT} corresponds to your Scenario_Runner root folder. Change this for your Scenario_Runner root folder when copying the commands below.

  • Install the required Python dependencies using the same Python environment.
cd ${SCENARIO_RUNNER_ROOT} # Change ${SCENARIO_RUNNER_ROOT} for your Scenario_Runner root folder
pip3 install -r requirements.txt

1.3 Define the environment variables

We need to make sure that the different modules can find each other.

  • Edit your ~/.bashrc profile, adding the required environment definitions. You can open it with the following command. Remember to save your changes before closing.
gedit ~/.bashrc
  • Remember to source ~/.bashrc for these changes to take effect.
source ~/.bashrc
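The export definitions themselves did not survive extraction here; a typical set looks like the following sketch. The placeholder paths are assumptions and must be adapted to your own installation.

```shell
# Sketch: adapt these placeholder paths to your own installation.
export CARLA_ROOT=/path/to/carla             # your CARLA root folder
export SCENARIO_RUNNER_ROOT=/path/to/scenario_runner
export LEADERBOARD_ROOT=/path/to/leaderboard
# Let Python find the CARLA API, the Scenario_Runner and the Leaderboard.
export PYTHONPATH="${CARLA_ROOT}/PythonAPI/carla":"${SCENARIO_RUNNER_ROOT}":"${LEADERBOARD_ROOT}":"${PYTHONPATH}"
```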

2. Create Autonomous Agents with the Leaderboard

2.1 First steps with the Leaderboard

The Leaderboard will take care of running your autonomous agent and evaluating its behavior in different traffic situations across multiple routes. To better understand this process, let’s run a basic agent.

  • Run the CARLA server in one terminal.
./CarlaUE4.sh -quality-level=Epic -world-port=2000 -resx=800 -resy=600
  • Create a script that sets some environment variables for parameterization and runs the leaderboard_evaluator.py module. The script should be similar to the following.

# Parameterization settings. These will be explained in section 2.2. For now, simply copy them to run the test.
export SCENARIOS=${LEADERBOARD_ROOT}/data/all_towns_traffic_scenarios_public.json
export ROUTES=${LEADERBOARD_ROOT}/data/routes_devtest.xml
export TEAM_AGENT=${LEADERBOARD_ROOT}/leaderboard/autoagents/human_agent.py
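The extracted script above is missing its final line, the call that actually launches the evaluation. A sketch of that invocation is shown below; the flag names are assumptions and should be verified against the evaluator's --help output in your Leaderboard checkout.

```shell
# Sketch: launch the evaluator with the variables exported above.
# Verify the exact flag names against your Leaderboard checkout.
python3 ${LEADERBOARD_ROOT}/leaderboard/leaderboard_evaluator.py \
    --scenarios=${SCENARIOS} \
    --routes=${ROUTES} \
    --agent=${TEAM_AGENT}
```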


This will launch a pygame window giving you the option to manually control an agent. Follow the route indicated by the colorful waypoints, respecting traffic rules, until you reach your destination.

2.2 Understanding the Leaderboard components

When running the test, we set a series of parameters. Let’s understand these and their role in the Leaderboard.

  • SCENARIOS (JSON) — The set of scenarios that will be tested in the simulation. A scenario is defined as a traffic situation. Agents will have to overcome these scenarios in order to pass the test. Participants have access to a set of traffic scenarios that work on the publicly available towns. There are 10 types of scenarios that are instantiated using different parameters. Here is a list of the available scenarios.
  • ROUTES (XML) — The set of routes that will be used for the simulation. Every route has a starting point (first waypoint) and an ending point (last waypoint). Additionally, they can contain a weather profile to set specific weather conditions. An XML file contains many routes, each one with an ID. Users can modify, add, and remove routes for training and validation purposes. The Leaderboard ships with a set of routes for debug, training, and validation. The routes used for the online evaluation are secret.
  • REPETITIONS (int) — Number of times each route is repeated for statistical purposes.
  • TEAM_AGENT (Python script) — Path to the Python script that launches the agent. This has to be a class inherited from leaderboard.autoagents.autonomous_agent.AutonomousAgent. The steps to create an agent are explained in the next step.
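For training tooling it is often handy to read the route files programmatically. The snippet below parses a file in the general shape described above using the standard library; the element and attribute names here are illustrative, so check them against the XML files shipped in the Leaderboard data folder.

```python
import xml.etree.ElementTree as ET

# Illustrative routes file; verify tag/attribute names against the real data.
ROUTES_XML = """
<routes>
  <route id="0" town="Town01">
    <waypoint x="100.0" y="50.0" z="0.0"/>
    <waypoint x="250.0" y="50.0" z="0.0"/>
  </route>
</routes>
"""

def parse_routes(xml_text):
    """Return a {route_id: [(x, y, z), ...]} mapping from a routes XML string."""
    routes = {}
    for route in ET.fromstring(xml_text).iter('route'):
        routes[route.get('id')] = [
            (float(w.get('x')), float(w.get('y')), float(w.get('z')))
            for w in route.iter('waypoint')
        ]
    return routes

routes = parse_routes(ROUTES_XML)  # {'0': [(100.0, 50.0, 0.0), (250.0, 50.0, 0.0)]}
```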

Other relevant parameters are described below.

  • TEAM_CONFIG (defined by the user) — Path to an arbitrary configuration file read by the provided agent. You are responsible for defining and parsing this file within your AutonomousAgent class.
  • DEBUG_CHALLENGE (int) — Flag that indicates whether debug information should be shown during the simulation. By default this variable is unset (0), so no debug information is displayed. When it is set to 1, the simulator will display the reference route to be followed. If it is set to anything greater than 1, the engine will print the complete state of the simulation for debugging purposes.
  • CHECKPOINT_ENDPOINT (JSON) — The name of a file that serves as storage for partial results and the final statistics.
  • RECORD_PATH (string) — Path to a folder that will store the CARLA logs. This is unset by default.
  • RESUME — Flag to indicate if the simulation should be resumed from the last route. This is unset by default.
  • CHALLENGE_TRACK_CODENAME (string) — Track in which the agent is competing. There are two possible options: SENSORS and MAP. The SENSORS track gives access to multiple cameras, a LIDAR, a RADAR, a GNSS, an IMU, and a speedometer. In addition to these sensors, the MAP track allows direct access to the OpenDRIVE HD map. You are responsible for parsing and processing the OpenDRIVE map as needed.
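As an example, a debug-friendly local configuration could combine these variables as follows; the values are illustrative, not required.

```shell
# Illustrative local configuration for a debugging run.
export REPETITIONS=1
export DEBUG_CHALLENGE=1          # draw the reference route in the simulator
export CHECKPOINT_ENDPOINT=${LEADERBOARD_ROOT}/results.json
export CHALLENGE_TRACK_CODENAME=SENSORS
```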

These environment variables are passed to ${LEADERBOARD_ROOT}/leaderboard/leaderboard_evaluator.py, which serves as the entry point to perform the simulation. Take a look at this module to find out more details on how your agent will be executed and evaluated.

3. Creating your own Autonomous Agent

The definition of a new agent starts by creating a new class that inherits from leaderboard.autoagents.autonomous_agent.AutonomousAgent.

3.1 Create get_entry_point

First, define a function called get_entry_point that returns the name of your new class. This will be used to automatically instantiate your agent.

from leaderboard.autoagents.autonomous_agent import AutonomousAgent

def get_entry_point():
    return 'MyAgent'

class MyAgent(AutonomousAgent):

3.2 Override the setup method

Within your agent class override the setup method. This method performs all the initialization and definitions needed by your agent. It will be automatically called when an instance of your agent is created. It can receive an optional argument pointing to a configuration file. Users are expected to parse this file. At a minimum, you need to specify in which track you are participating.

from srunner.autoagents.autonomous_agent import Track
    def setup(self, path_to_conf_file):
        self.track = Track.SENSORS # At a minimum, this method sets the Leaderboard modality. In this case, SENSORS

3.3 Override the sensors method

You will also have to override the sensors method, which defines all the sensors required by your agent.

def sensors(self):
    sensors = [
        {'type': 'sensor.camera.rgb', 'x': 0.7, 'y': 0.0, 'z': 1.60, 'roll': 0.0, 'pitch': 0.0, 'yaw': 0.0,
         'width': 300, 'height': 200, 'fov': 100, 'id': 'Center'},
        {'type': 'sensor.lidar.ray_cast', 'x': 0.7, 'y': -0.4, 'z': 1.60, 'roll': 0.0, 'pitch': 0.0,
         'yaw': -45.0, 'id': 'LIDAR'},
        {'type': 'sensor.other.radar', 'x': 0.7, 'y': -0.4, 'z': 1.60, 'roll': 0.0, 'pitch': 0.0,
         'yaw': -45.0, 'fov': 30, 'id': 'RADAR'},
        {'type': 'sensor.other.gnss', 'x': 0.7, 'y': -0.4, 'z': 1.60, 'id': 'GPS'},
        {'type': 'sensor.other.imu', 'x': 0.7, 'y': -0.4, 'z': 1.60, 'roll': 0.0, 'pitch': 0.0,
         'yaw': -45.0, 'id': 'IMU'},
        {'type': 'sensor.opendrive_map', 'reading_frequency': 1, 'id': 'OpenDRIVE'},
        {'type': 'sensor.speedometer', 'reading_frequency': 20, 'id': 'SPEED'},
    ]
    return sensors

Most of the sensor attributes have fixed values. These can be checked in the Leaderboard source code. This is done so that all the teams compete within a common sensor framework.

Every sensor is represented as a dictionary, containing the following attributes:

  • type: type of the sensor to be added.
  • id: the label that will be given to the sensor to be accessed later.
  • other attributes: these are sensor dependent, e.g.: extrinsics and fov.

Users can set both intrinsic and extrinsic parameters (location and orientation) of each sensor, in relative coordinates with respect to the vehicle. Please note that CARLA uses the Unreal Engine coordinate system: x-front, y-right, z-up.
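As an illustration of this dictionary format, the helper below (hypothetical, not part of the Leaderboard API) indexes a sensor configuration by id and rejects duplicate labels, which would otherwise make individual sensors unreachable later.

```python
# Two sensor specs in the dictionary format described above.
sensors = [
    {'type': 'sensor.camera.rgb', 'x': 0.7, 'y': 0.0, 'z': 1.60,
     'width': 300, 'height': 200, 'fov': 100, 'id': 'Center'},
    {'type': 'sensor.other.gnss', 'x': 0.7, 'y': -0.4, 'z': 1.60, 'id': 'GPS'},
]

def index_by_id(sensor_list):
    """Map each sensor's 'id' label to its full spec, rejecting duplicates."""
    index = {}
    for spec in sensor_list:
        if spec['id'] in index:
            raise ValueError('duplicate sensor id: ' + spec['id'])
        index[spec['id']] = spec
    return index

sensor_index = index_by_id(sensors)  # access specs later as sensor_index['GPS']
```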

The available sensors are:

  • sensor.camera.rgb — Regular camera that captures images.
  • sensor.lidar.ray_cast — Velodyne 64 LIDAR.
  • sensor.other.radar — Long-range RADAR (up to 100 meters).
  • sensor.other.gnss — GPS sensor returning geo location data.
  • sensor.other.imu — 6-axis Inertial Measurement Unit.
  • sensor.opendrive_map — Pseudosensor that exposes the HD map in OpenDRIVE format parsed as a string.
  • sensor.speedometer — Pseudosensor that provides an approximation of your linear velocity.

Trying to set any other sensor, or misspelling these types, will make the setup fail.

You can use any of these sensors to configure your sensor stack. However, in order to keep the computational load moderate, we have set limits on the number of units of each sensor type that can be added to an agent:

  'sensor.camera.rgb': 4,
  'sensor.lidar.ray_cast': 1,
  'sensor.other.radar': 2,
  'sensor.other.gnss': 1,
  'sensor.other.imu': 1,
  'sensor.opendrive_map': 1,
  'sensor.speedometer': 1

Trying to set more units of a sensor type than these limits allow will make the setup fail.

There are also spatial restrictions that limit the placement of your sensors within the volume of your vehicle. If a sensor is located more than 3 meters away from its parent in any axis (e.g. [3.1,0.0,0.0]), the setup will fail.
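This placement rule is easy to check offline before submitting. The helper below is a sketch of the rule as stated above, not the Leaderboard's own validation code:

```python
MAX_OFFSET = 3.0  # metres allowed on each axis, per the restriction above

def placement_ok(sensor_spec):
    """True if the sensor's x/y/z offsets all stay within +/-3 m of the vehicle."""
    return all(abs(sensor_spec.get(axis, 0.0)) <= MAX_OFFSET
               for axis in ('x', 'y', 'z'))

ok = placement_ok({'type': 'sensor.other.gnss', 'x': 0.7, 'y': -0.4, 'z': 1.60})
bad = placement_ok({'type': 'sensor.other.gnss', 'x': 3.1, 'y': 0.0, 'z': 0.0})
```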

3.4 Override the run_step method

This method will be called once per time step to produce a new action, in the form of a carla.VehicleControl object. This control will be used to update your agent in the simulation. It receives two arguments:

  • input_data: A dictionary containing raw sensor data for the requested sensors. This dictionary is indexed by the ids defined in the sensor method.

  • timestamp: A timestamp of the current simulation instant.

You also have access to the GPS route that the ego agent should travel to achieve its destination. The route is stored in the self._global_plan member.

[({'z': 0.0, 'lat': 48.99822669411668, 'lon': 8.002271601998707}, RoadOption.LEFT),
 ({'z': 0.0, 'lat': 48.99822669411668, 'lon': 8.002709765148996}, RoadOption.RIGHT),
 ({'z': 0.0, 'lat': 48.99822679980298, 'lon': 8.002735250105061}, RoadOption.STRAIGHT)]

The route is represented as a list of tuples. The first element of each tuple contains a waypoint, expressed as a latitude, a longitude, and a z component. Be aware that the distance between two consecutive waypoints can be up to hundreds of meters. Do not rely on these waypoints as your main mechanism to navigate the environment.
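Because of that sparsity, agents typically convert the GPS waypoints to local metric coordinates before interpolating between them. A common approximation (an equirectangular projection, adequate over the small extent of a CARLA town; not part of the Leaderboard API) is sketched below using the first two waypoints of the example plan.

```python
import math

EARTH_RADIUS = 6371000.0  # mean Earth radius in metres (approximation)

def gps_to_local(lat, lon, ref_lat, ref_lon):
    """Approximate (east, north) offsets in metres from a reference GPS point."""
    east = math.radians(lon - ref_lon) * EARTH_RADIUS * math.cos(math.radians(ref_lat))
    north = math.radians(lat - ref_lat) * EARTH_RADIUS
    return east, north

ref = (48.99822669411668, 8.002271601998707)   # first waypoint of the example
nxt = (48.99822669411668, 8.002709765148996)   # second waypoint
east, north = gps_to_local(nxt[0], nxt[1], ref[0], ref[1])
gap = math.hypot(east, north)  # roughly 32 m between consecutive waypoints
```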

The second element contains a high-level command. The set of available high-level commands is:

  • RoadOption.CHANGELANELEFT: Move one lane to the left.
  • RoadOption.CHANGELANERIGHT: Move one lane to the right.
  • RoadOption.LANEFOLLOW: Continue in the current lane.
  • RoadOption.LEFT: Turn left at the intersection.
  • RoadOption.RIGHT: Turn right at the intersection.
  • RoadOption.STRAIGHT: Keep straight at the intersection.

There could be ambiguous situations where the semantics of left and right are not clear. To disambiguate them, you should consider the GPS position of the next waypoints.
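One simple heuristic (a sketch, not part of the Leaderboard API) projects the agent's heading and the vector towards the next waypoint into a local x-east, y-north frame and uses the sign of their 2-D cross product:

```python
def turn_direction(heading, position, next_position):
    """Classify the turn towards the next waypoint as LEFT, RIGHT or STRAIGHT.

    All arguments are (x, y) tuples in a local metric frame with x pointing
    east and y pointing north; a positive cross product means a left turn.
    """
    to_next = (next_position[0] - position[0], next_position[1] - position[1])
    cross = heading[0] * to_next[1] - heading[1] * to_next[0]
    if abs(cross) < 1e-3:
        return 'STRAIGHT'
    return 'LEFT' if cross > 0 else 'RIGHT'

# Heading east from the origin: a waypoint to the north-east means a left turn.
direction = turn_direction((1.0, 0.0), (0.0, 0.0), (10.0, 10.0))
```

Note that CARLA's own coordinate system is left-handed (y-right, as described in section 3.3), so the sign convention must be flipped if you work directly in simulator coordinates.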

4. Training and testing your agent

We have prepared a set of predefined routes to serve as a starting point. You can use these routes for training and verifying the performance of your agent. Routes can be found in the folder ${LEADERBOARD_ROOT}/data:

  • routes_training.xml: 50 routes intended to be used as training data (112.8 Km).
  • routes_testing.xml: 26 routes intended to be used as verification data (58.9 Km).

4.1 Baselines