Localisation

Documentation for each localisation module in the main NUbots codebase
Updated 23 Mar 2026

BallLocalisation

Description

This module takes in a list of vision balls, selects the ball measurement closest to the current estimate, and applies an Unscented Kalman Filter to estimate the ball's position and velocity in world space.
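The measurement-selection step can be sketched as follows. This is a minimal illustration, assuming (x, y) positions in world space; the function name and tuple representation are illustrative, not the module's actual API:

```python
import math

def select_closest_ball(current_estimate, vision_balls):
    """Pick the vision ball measurement nearest the current filter estimate.

    `current_estimate` and each entry of `vision_balls` are (x, y) tuples in
    world space. The selected measurement would then be fed to the UKF update.
    """
    return min(
        vision_balls,
        key=lambda b: math.hypot(b[0] - current_estimate[0],
                                 b[1] - current_estimate[1]),
    )
```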

Usage

Include this module to allow the robot to estimate the ball's position and velocity.

Consumes

  • message::vision::Balls uses the ball position estimate from vision
  • message::input::Sensors uses sensors to compute transform from camera {c} to torso space {t}
  • message::support::FieldDescription uses field description to obtain height of ball off the ground

Emits

  • message::localisation::Ball contains filtered ball position measurement

FieldLocalisationNLopt

Description

A localisation method that estimates where the field is in world space, using field line points and field line intersections in a non-linear optimisation.

Optimisation Setup

The optimisation framework integrates several cost components and constraints to compute the optimal field localisation state:

Cost Components

  1. Field Line Alignment Cost ($J_{\text{fl}}$):

    • Measures the alignment of predicted field lines with observed ones.
    • Calculated as the squared Euclidean distance between field line points and the nearest point on any observed line, scaled by a predefined weight and divided by the number of field line points:

    J_{\text{fl}} = \frac{w_{\text{fl}} \sum_{i=1}^{N} \left( \text{dist}(r_{\text{obs},i}, r_{\text{line}}) \right)^2}{N}

    • Points off the field are given a constant weight set in configuration.
  2. Field Line Intersection Cost ($J_{\text{fi}}$):

    • Assesses the accuracy of predicted field line intersections against observed intersections.
    • Computed similarly through the squared distances between predicted and observed intersections:

J_{\text{fi}} = \frac{w_{\text{fi}} \sum_{j=1}^{M} \left( \text{dist}(r_{\text{int},j}, r_{\text{obs},j}) \right)^2}{M}

  3. State Change Cost ($J_{\text{sc}}$):
    • Penalises large deviations from the initial state estimate to ensure temporal consistency.
    • Expressed as:

J_{\text{sc}} = w_{\text{sc}} \|\mathbf{x} - \mathbf{x}_{\text{init}}\|^2
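The cost components above can be sketched directly from their definitions. This is a minimal illustration, assuming 2D points as (x, y) tuples and that the nearest map point for each observation has already been found (the function names and arguments are illustrative):

```python
import math

def dist(p, q):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def field_line_cost(observed_points, nearest_map_points, w_fl):
    """J_fl: weighted mean of squared distances from each observed field line
    point to the nearest point on the field line map."""
    n = len(observed_points)
    return w_fl * sum(dist(o, m) ** 2
                      for o, m in zip(observed_points, nearest_map_points)) / n

def state_change_cost(x, x_init, w_sc):
    """J_sc: penalise squared deviation of the state (x, y, theta) from the
    initial estimate."""
    return w_sc * sum((a - b) ** 2 for a, b in zip(x, x_init))
```

The intersection cost $J_{\text{fi}}$ follows the same pattern as `field_line_cost`, applied to predicted and observed intersections.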

Constraints

The optimisation is subject to the following constraints:

  • State Bounds:

    • Limits the allowable state changes between optimisation steps to ensure the solution does not jump an unrealistic amount between updates:

\mathbf{x}_{\text{init}} - \Delta \mathbf{x} \leq \mathbf{x} \leq \mathbf{x}_{\text{init}} + \Delta \mathbf{x}

  • Here, $\Delta \mathbf{x}$ represents the maximum allowable change in each state dimension ($x$, $y$, and $\theta$).

  • Minimum Field Line Points:

    • The algorithm requires a minimum number of field line points to run the optimisation to ensure sufficient data for accurate estimation:

\text{Count}(\text{field line points}) \geq \text{Min points}

  • Robot Stability:
    • Optimisation will not proceed if the robot is in an unstable state (e.g., falling):

\text{stability} > \text{FALLING}
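The two gating constraints amount to a simple precondition check before the optimisation runs. A minimal sketch, where `stability` is an ordinal that is larger when the robot is more stable and the threshold values are illustrative configuration defaults, not the module's actual ones:

```python
def should_run_optimisation(num_field_line_points, stability, *,
                            min_points=10, falling_level=0):
    """Gate the field localisation optimisation on data sufficiency and
    robot stability.

    Returns True only if enough field line points were observed and the
    robot's stability is strictly above the FALLING level.
    """
    return num_field_line_points >= min_points and stability > falling_level
```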

Optimisation Algorithm

  • The overall cost function optimised is:

J(\mathbf{x}) = J_{\text{fl}} + J_{\text{fi}} + J_{\text{sc}}

Where:

  • $\mathbf{x} = [x, y, \theta]^T$ represents the state vector.
  • $w_{\text{fl}}$, $w_{\text{fi}}$, and $w_{\text{sc}}$ are weights controlling the relative importance of each cost component.

Optimisation is carried out using NLopt's COBYLA (Constrained Optimisation BY Linear Approximations) algorithm, respecting the constraints and bounds set on the changes allowed in the state to ensure plausible and robust field localisation.
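The bounded COBYLA step can be illustrated with SciPy's COBYLA implementation as a stand-in for NLopt (the module itself uses NLopt in C++; this sketch and its `delta` default are illustrative). Since COBYLA works with inequality constraints of the form g(x) >= 0, the box bounds around the initial state are expressed as two such constraints per dimension:

```python
import numpy as np
from scipy.optimize import minimize

def optimise_state(x_init, cost, delta=np.array([0.1, 0.1, 0.1])):
    """Minimise the total cost J(x) over the state (x, y, theta) with COBYLA,
    constraining each state dimension to stay within +/- delta of x_init."""
    constraints = [
        # x[i] >= x_init[i] - delta[i]
        {"type": "ineq", "fun": lambda x, i=i: x[i] - (x_init[i] - delta[i])}
        for i in range(3)
    ] + [
        # x[i] <= x_init[i] + delta[i]
        {"type": "ineq", "fun": lambda x, i=i: (x_init[i] + delta[i]) - x[i]}
        for i in range(3)
    ]
    return minimize(cost, x_init, method="COBYLA", constraints=constraints)
```

With a cost whose unconstrained minimum lies outside the box, the solution lands on the bound, mirroring the state-change limit described above.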

Cost Threshold and Update Acceptance

The module uses a cost threshold (cost_threshold) to determine whether to accept optimisation results:

  • Accepting Updates: Optimisation results are only applied if their cost is below cost_threshold. This prevents poor localisations from corrupting the state estimate.
  • Rejecting Updates: When the cost exceeds the threshold, the optimisation result is rejected and the previous filtered state is maintained. A warning is logged and a counter is incremented.
  • Triggering Resets: If the cost exceeds the threshold for max_over_cost consecutive updates (and reset_delay has elapsed), an uncertainty reset is triggered.

This mechanism provides robustness against temporary vision anomalies or ambiguous field features while maintaining accurate localisation when observations are reliable.
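The accept/reject/reset logic can be sketched as a small stateful gate. The parameter names mirror the configuration described above, but the class itself is illustrative and omits the `reset_delay` timer for brevity:

```python
class UpdateGate:
    """Accept or reject optimisation results based on a cost threshold,
    requesting an uncertainty reset after repeated rejections."""

    def __init__(self, cost_threshold, max_over_cost):
        self.cost_threshold = cost_threshold
        self.max_over_cost = max_over_cost
        self.over_cost_count = 0

    def process(self, cost):
        """Return 'accept', 'reject', or 'reset' for one optimisation result."""
        if cost < self.cost_threshold:
            self.over_cost_count = 0
            return "accept"
        self.over_cost_count += 1
        if self.over_cost_count >= self.max_over_cost:
            self.over_cost_count = 0
            return "reset"
        return "reject"
```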

Reset

A soft reset, ResetFieldLocalisation, is used when the robot starts on the side of the field (or in a custom position specified in the configuration).

A more extreme reset, UncertaintyResetFieldLocalisation, is used when the cost of the current field position is high during play and either a local search or a field-wide search must be conducted to recover the position. The local search is a grid search with configurable parameters, and it uses the lowest-cost position if that cost is under the configurable cost threshold. If this fails to find a suitable position, a global grid search is conducted over the half of the field that the robot was last in, so as to ignore mirror states.
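The local grid search can be sketched as follows. This is a minimal illustration over the (x, y, theta) neighbourhood of the current state; the grid extent, step, and threshold defaults are illustrative, not the module's actual configuration:

```python
import itertools

def local_grid_search(cost, x0, half_width=0.5, step=0.1, cost_threshold=1.0):
    """Grid-search the neighbourhood of x0 = (x, y, theta) and return the
    lowest-cost state if its cost beats the threshold, else None (signalling
    that a field-wide search is needed)."""
    n = round(half_width / step)
    offsets = [i * step for i in range(-n, n + 1)]
    best_state, best_cost = None, float("inf")
    for dx, dy, dth in itertools.product(offsets, offsets, offsets):
        candidate = (x0[0] + dx, x0[1] + dy, x0[2] + dth)
        c = cost(candidate)
        if c < best_cost:
            best_state, best_cost = candidate, c
    return best_state if best_cost < cost_threshold else None
```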

Usage

Include this module to allow the robot to estimate where the field is in world space.

Consumes

  • message::vision::FieldLines field line points are used in the field localisation cost function
  • message::vision::FieldLineIntersections field line intersections are used in the field localisation cost function
  • message::vision::Goals goal positions are used in the field localisation cost function
  • message::support::FieldDescription to determine the field map and bounds of the field

Emits

  • message::localisation::Field contains the estimated (x, y, theta) state
  • message::localisation::ResetFieldLocalisation signalling a side-of-field localisation reset
  • message::localisation::UncertaintyResetFieldLocalisation signalling a local or field-wide reset, which is a computationally intensive action
  • message::localisation::FinishReset signalling that a local or field-wide reset has completed

Dependencies

  • Eigen
  • utility::math::stats::MultivariateNormal Utility for sampling from a multivariate normal distribution

Mocap

Description

The Mocap module processes motion capture data to provide ground truth robot pose information. It receives motion capture data containing rigid body positions and orientations, filters for the specific robot rigid body, and converts this data into a robot pose ground truth message.

Usage

The module requires configuration through Mocap.yaml with the following parameters:

  • robot_rigid_body_id: The ID of the rigid body that represents the robot in the motion capture system

Enable ground truth in the SensorFilter and FieldLocalisationNLopt modules to use this data directly for pose estimation (odometry/localisation).
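The rigid-body filtering step can be sketched as a lookup over a motion capture frame. This is a minimal illustration; the dictionary field names are illustrative, not the actual message schema:

```python
def extract_robot_pose(rigid_bodies, robot_rigid_body_id):
    """Find the configured robot rigid body in a motion capture frame and
    return its (position, orientation) pose, or None if it is not tracked."""
    for body in rigid_bodies:
        if body["id"] == robot_rigid_body_id:
            return body["position"], body["orientation"]
    return None
```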

Consumes

  • message::input::MotionCapture: Motion capture data containing positions and orientations of tracked rigid bodies
  • extension::Configuration: Configuration data from Mocap.yaml

Emits

  • message::localisation::RobotPoseGroundTruth: The ground truth pose of the robot, containing:
    • Transformation matrix (Hft)

RobotLocalisation

Description

Estimates the position and velocity of other robots on the field.

The module works by tracking multiple robots, running an Unscented Kalman Filter (UKF) for each. Vision measurements are associated with tracked robots using global nearest neighbour matching within an acceptance radius. Messages received over the network from other robots are used to determine whether a tracked robot is a teammate or an opponent and to update the filter.

Tracked robots are discarded after going unseen for a consecutive number of updates in which they should have been visible.
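The data association step can be sketched with a greedy nearest-neighbour pass, a simplified stand-in for global nearest neighbour. This is illustrative, assuming (x, y) positions; the real module operates on full track state:

```python
import math

def associate(measurements, tracks, acceptance_radius):
    """Match each vision measurement to the closest unmatched track within
    the acceptance radius.

    Returns a dict mapping measurement index -> track index. Unmatched
    measurements (absent from the dict) would spawn new tracked robots.
    """
    matches, used = {}, set()
    for m_idx, m in enumerate(measurements):
        best, best_d = None, acceptance_radius
        for t_idx, t in enumerate(tracks):
            if t_idx in used:
                continue
            d = math.hypot(m[0] - t[0], m[1] - t[1])
            if d < best_d:
                best, best_d = t_idx, d
        if best is not None:
            matches[m_idx] = best
            used.add(best)
    return matches
```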

Usage

Include this module to track other robots on the field.

Consumes

  • message::vision::Robots uses the robot position estimates from vision
  • message::input::RoboCup uses teammate position from their WiFi message
  • message::vision::GreenHorizon uses the GreenHorizon to manage tracked robots
  • message::localisation::Field uses the field transformation matrix Hfw to get the location of the tracked robots in field space
  • message::support::FieldDescription uses the field dimensions to determine whether the robot is outside the field (plus a given distance outside the field)
  • message::input::GameState to get the robot's team colour for visualisation in NUsight

Emits

  • message::localisation::Robots contains filtered robot positions and velocity estimates
Copyright © 2026 NUbots - CC-BY-4.0