# Overview

The main goal of the CARLA Autonomous Driving Leaderboard is to evaluate the driving proficiency of autonomous agents in realistic traffic situations. The leaderboard serves as an open platform for the community to perform fair and reproducible evaluations, simplifying the comparison between different approaches.

Autonomous agents have to drive through a set of predefined routes. For each route, agents are initialized at a starting point and directed to drive to a destination point, provided with a description of the route. Routes take place in a variety of areas, including freeways, urban scenes, and residential districts.

Agents will face multiple traffic situations in these areas, such as:

- Lane merging
- Lane changing
- Negotiations at traffic intersections
- Handling traffic lights and traffic signs
- Coping with pedestrians, cyclists and other elements

These traffic situations are based on the NHTSA typology. The full list of traffic situations used in the leaderboard can be reviewed here.

Agents will be evaluated in a variety of weather conditions, including daylight scenes, sunset, rain, fog, and night, among others.

# Participation modalities

The leaderboard offers two participation modalities, SENSORS and MAP. These modalities differ in the type of input data that your agent can request from the platform.

### SENSORS track

- sensor.camera: RGB camera sensor (up to 4 units)
- sensor.lidar.ray_cast: Velodyne 64 LIDAR sensor (up to 1 unit)
- sensor.other.gnss: GPS sensor (up to 1 unit)
- sensor.other.imu: 6-axis Inertial Measurement Unit (up to 1 unit)
- sensor.speedometer: A sensor that provides an approximation of your linear velocity (up to 1 unit)

Notice that the number of units of each sensor is limited in order to keep the computational budget under control.
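As a sketch of how such a suite might be requested: in the leaderboard's agent API, sensors are typically declared by returning a list of specification dictionaries from the agent's `sensors()` method. The exact keys and pose fields below are assumptions based on common usage, not the authoritative signature; consult the Get started section for the exact format of your leaderboard version.

```python
from collections import Counter

def sensors():
    """Illustrative sensor suite for the SENSORS track.

    The dictionary keys (type, id, pose and camera fields) are
    assumptions for this sketch; adjust them to the agent API of the
    leaderboard version you are running.
    """
    return [
        {'type': 'sensor.camera', 'id': 'front_rgb',
         'x': 0.7, 'y': 0.0, 'z': 1.6,
         'roll': 0.0, 'pitch': 0.0, 'yaw': 0.0,
         'width': 800, 'height': 600, 'fov': 100},
        {'type': 'sensor.lidar.ray_cast', 'id': 'lidar',
         'x': 0.7, 'y': 0.0, 'z': 2.5,
         'roll': 0.0, 'pitch': 0.0, 'yaw': 0.0},
        {'type': 'sensor.other.gnss', 'id': 'gps',
         'x': 0.0, 'y': 0.0, 'z': 1.6},
        {'type': 'sensor.other.imu', 'id': 'imu',
         'x': 0.0, 'y': 0.0, 'z': 1.6},
        {'type': 'sensor.speedometer', 'id': 'speed'},
    ]

# Sanity-check the unit limits stated above: at most 4 cameras,
# at most 1 of every other sensor type.
counts = Counter(spec['type'] for spec in sensors())
assert counts['sensor.camera'] <= 4
assert all(counts[t] <= 1 for t in counts if t != 'sensor.camera')
```

Checking the suite against the unit limits before submitting saves a failed run, since requests that exceed the limits are rejected.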

### MAP track

In addition to the sensors available in the SENSORS track, agents can request access to an HD map. The HD map is provided as an OpenDRIVE file encoded in a JSON string.

You are fully responsible for parsing or converting this file into a representation that is useful to your agent.
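A hedged sketch of unpacking that file: assuming the JSON string is an object with an `opendrive` key holding the raw XML (the key name is an assumption for this example; check the data your sensor actually delivers), the map can be parsed with the standard library before building your own representation.

```python
import json
import xml.etree.ElementTree as ET

def parse_opendrive_payload(payload: str):
    """Extract road ids from an OpenDRIVE map delivered as a JSON string.

    Assumption: the payload is a JSON object whose 'opendrive' key holds
    the raw OpenDRIVE XML. OpenDRIVE stores the road network as <road>
    elements under the <OpenDRIVE> root.
    """
    xodr = json.loads(payload)['opendrive']
    root = ET.fromstring(xodr)
    return [road.get('id') for road in root.findall('road')]

# Minimal hand-written OpenDRIVE fragment for demonstration.
sample = json.dumps({'opendrive':
    '<OpenDRIVE><road id="1" length="100.0"/>'
    '<road id="2" length="50.0"/></OpenDRIVE>'})
road_ids = parse_opendrive_payload(sample)  # ['1', '2']
```

In practice you would go further, e.g. reading lane sections and reference-line geometry to build a routable graph, but the entry point is the same.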

The complete list of sensors for this modality is:

- sensor.camera: RGB camera sensor (up to 4 units)
- sensor.lidar.ray_cast: Velodyne 64 LIDAR sensor (up to 1 unit)
- sensor.other.gnss: GPS sensor (up to 1 unit)
- sensor.other.imu: 6-axis Inertial Measurement Unit (up to 1 unit)
- sensor.speedometer: A sensor that provides an approximation of your linear velocity (up to 1 unit)
- sensor.opendrive_map: A sensor that exposes the HD map in OpenDRIVE format as a JSON string (up to 1 unit)

In addition to these sensors, all agents will receive a high-level route description indicating the path to follow in order to reach the destination. The route is represented as a list of tuples. The first element of each tuple contains a waypoint, expressed as a latitude, a longitude, and a z component. Be aware that the distance between two consecutive waypoints can be up to hundreds of meters. Do not rely on these waypoints as your main mechanism to navigate the environment.

```python
[({'z': 0.0, 'lat': 48.99822669411668, 'lon': 8.002271601998707}, RoadOption.LEFT),
 ({'z': 0.0, 'lat': 48.99822669411668, 'lon': 8.002709765148996}, RoadOption.RIGHT),
 ...
 ({'z': 0.0, 'lat': 48.99822679980298, 'lon': 8.002735250105061}, RoadOption.STRAIGHT)]
```
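To get a feel for the waypoint spacing mentioned above, the gap between two consecutive waypoints can be estimated from their latitude and longitude. This helper is an illustrative sketch using an equirectangular approximation, not part of the leaderboard API.

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius, metres

def waypoint_gap_m(wp_a, wp_b):
    """Approximate ground distance in metres between two route waypoints.

    Uses the equirectangular approximation, which is accurate enough at
    the scale of consecutive route waypoints.
    """
    lat_a = math.radians(wp_a['lat'])
    lat_b = math.radians(wp_b['lat'])
    d_lat = lat_b - lat_a
    d_lon = math.radians(wp_b['lon'] - wp_a['lon'])
    x = d_lon * math.cos((lat_a + lat_b) / 2.0)  # scale longitude by latitude
    return EARTH_RADIUS_M * math.hypot(x, d_lat)

# The first two waypoints of the example above are only ~30 m apart,
# but gaps of hundreds of metres are possible on other routes.
wp0 = {'z': 0.0, 'lat': 48.99822669411668, 'lon': 8.002271601998707}
wp1 = {'z': 0.0, 'lat': 48.99822669411668, 'lon': 8.002709765148996}
gap = waypoint_gap_m(wp0, wp1)
```

Knowing the gap size makes it clear why the waypoints are a high-level guide rather than a drivable trajectory.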

The second element contains a high-level command. The set of available high-level commands is:

- RoadOption.LANEFOLLOW: Continue in the current lane.
- RoadOption.LEFT: Turn left at the intersection.
- RoadOption.RIGHT: Turn right at the intersection.
- RoadOption.STRAIGHT: Keep straight at the intersection.
- RoadOption.CHANGELANELEFT: Move one lane to the left.
- RoadOption.CHANGELANERIGHT: Move one lane to the right.

There may be ambiguous situations where the semantics of left and right are unclear. To disambiguate these situations, consider the GPS positions of the next waypoints.
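As an illustration of that idea, here is a hedged sketch: assuming waypoints have been converted to a local metric frame whose y-axis points to the agent's left, the sign of a 2D cross product between the heading and the vector to the next waypoint separates left from right. The function name and frame convention are assumptions for this example, not leaderboard API.

```python
def turn_direction(heading, position, target):
    """Classify a target waypoint as 'left', 'right', or 'straight'
    relative to the agent's heading, via the 2D cross product.

    heading: unit vector (hx, hy) in a local metric frame whose y-axis
    points to the agent's left; position/target: (x, y) in that frame.
    Hypothetical helper for illustration.
    """
    hx, hy = heading
    tx = target[0] - position[0]
    ty = target[1] - position[1]
    cross = hx * ty - hy * tx  # > 0: target lies to the agent's left
    if abs(cross) < 1e-3 * max(abs(tx), abs(ty), 1e-9):
        return 'straight'  # nearly collinear with the heading
    return 'left' if cross > 0 else 'right'

# Heading along +x, waypoint ahead and to the left:
direction = turn_direction((1.0, 0.0), (0.0, 0.0), (10.0, 5.0))  # 'left'
```

The same test applied to a waypoint a few segments ahead, rather than the immediately next one, is more robust near intersections.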

# Evaluation and metrics

The driving proficiency of an agent can be characterized by multiple metrics. For this leaderboard we have selected a set of metrics that help understand different aspects of driving.

## Driving score

This is the main metric of the leaderboard, serving as an aggregate of the average route completion and the number of traffic infractions:

$$\text{Driving score} = \frac{1}{N}\sum_{i=1}^{N} R_i \, P_i$$

where $N$ is the number of routes, $R_i$ is the percentage of completion of the $i$-th route, and $P_i$ is the infraction penalty of the route. $P_i$ is obtained by multiplying the coefficients of all the infractions that occurred during the route:

$$P_i = \prod_{j} \left(p_j\right)^{n_{i,j}}$$

where $p_j$ is the penalty coefficient of infraction type $j$ and $n_{i,j}$ is the number of infractions of type $j$ committed during route $i$. The values of these penalties are defined as:

```python
PENALTY_COLLISION_PEDESTRIAN = 0.50
PENALTY_COLLISION_VEHICLE = 0.60
PENALTY_COLLISION_STATIC = 0.65
PENALTY_TRAFFIC_LIGHT = 0.70
PENALTY_STOP = 0.80
```
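Putting these definitions together, the driving score can be sketched in a few lines. Only the penalty coefficients come from the list above; the dictionary keys and function names are assumptions for illustration.

```python
# Penalty coefficients from the leaderboard definition above.
PENALTIES = {
    'collision_pedestrian': 0.50,
    'collision_vehicle': 0.60,
    'collision_static': 0.65,
    'traffic_light': 0.70,
    'stop_sign': 0.80,
}

def infraction_penalty(infractions):
    """P_i: product of penalty coefficients, one factor per infraction."""
    p = 1.0
    for kind, count in infractions.items():
        p *= PENALTIES[kind] ** count
    return p

def driving_score(routes):
    """Mean of R_i * P_i over all routes; R_i is completion in [0, 1]."""
    return sum(r * infraction_penalty(inf) for r, inf in routes) / len(routes)

# A clean route scores its completion; infractions shrink it multiplicatively.
example = [
    (1.0, {}),                    # perfect route
    (0.8, {'traffic_light': 1}),  # 0.8 * 0.70
]
score = driving_score(example)    # (1.0 + 0.8 * 0.70) / 2 = 0.78
```

Because the penalties multiply, repeated infractions of the same type keep shrinking $P_i$ toward zero rather than subtracting a fixed amount.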

## Route completion

This metric focuses on the percentage of route distance completed by an agent, averaged across the $N$ routes: $\frac{1}{N}\sum_{i=1}^{N} R_i$.

## Infraction penalty

This metric aggregates the number of infractions triggered by an agent as a geometric series. An ideal agent will obtain a score of 1.0, demonstrating perfect autonomous driving on these routes.

## Other infractions and events

The CARLA leaderboard also offers individual metrics for the following infractions and events:

- Collisions with pedestrians
- Collisions with other vehicles
- Collisions with layout (static scene)
- Running a red light
- Running a stop sign
- Number of off-road driving events
- Number of route deviation events
- Number of simulation timeouts
- Number of times the agent got blocked

All these metrics are given as the number of events per km.
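The normalization itself is straightforward; a hypothetical helper, shown only for clarity:

```python
def events_per_km(event_count, distance_m):
    """Normalize an infraction count by the distance driven, in events/km.

    Illustrative helper; the leaderboard computes this server-side.
    """
    return event_count / (distance_m / 1000.0)

rate = events_per_km(3, 6000.0)  # 3 events over 6 km -> 0.5 events/km
```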

Agents will be penalized if they deviate from the assigned route. They will also be penalized if they get stuck without taking any action for more than 90 simulation seconds. In these cases the route will be interrupted, preventing the agent from continuing with it.

Additionally, if an agent drives off-road, that percentage of the route will not be considered towards the computation of the route completion score.

# Where do I start?

To get familiar with the leaderboard, we recommend reading carefully through the Get started section. Please spend enough time making sure everything works as expected locally.

Once you are ready, check the Submit section to learn how to prepare your submission.

# Terms and Conditions

The CARLA Autonomous Driving Leaderboard is offered for free as a service to the research community thanks to the generosity of our sponsors and collaborators.

Teams are provided with a time budget (currently 120 hours) to evaluate their submissions. Each submission will be evaluated in AWS using a g3.8xlarge instance. This gives users access to a dedicated node with a modern GPU and CPU.

Teams are also provided with a number of submissions (currently 20) per month. Both budgets are automatically refilled every month.

The organization of the CARLA leaderboard reserves the right to assign additional budget to a team. The organization also reserves the right to modify the default values of the monthly time budget and/or the number of submissions.

It is strictly prohibited to misuse or attack the infrastructure of the CARLA leaderboard, including all software and hardware used to run the service. Actions that deviate from the spirit of the CARLA leaderboard may result in the termination of a team's participation.