Maintaining Subsystems

How to maintain subsystems within the main codebase.
Ysobel Sims
Updated 18 Oct 2023

This page details how to maintain various subsystems within the main codebase.

Odometry

Webots Live Error Tracking

The nugus_controller in Webots can send live odometry ground truth data to the robot. To use this functionality:

  1. Set up Webots and compile the NUWebots code (the RoboCup setup is not needed).
  2. Run the kid.wbt world in Webots.
  3. Run webots/keyboardwalk in NUbots. See the Getting Started page to set up the NUbots codebase.
  4. Run NUsight or PlotJuggler and observe the graphs of the robot's odometry prediction, the ground truth, and the error in the torso's rotation and translation relative to the world.

Vision

The accuracy of the vision system relies on the accuracy of odometry and kinematics, since they affect the placement of the mesh and green horizon. It is important that these systems work reasonably well, otherwise the robot may have issues detecting objects.

If you are using Webots, you can turn on odometry ground truth in the SensorFilter module. Go to the SensorFilter.yaml configuration file and set the filtering_method to GROUND_TRUTH. This will use the ground truth odometry from Webots instead of the odometry from the robot. This is useful for testing the vision system without having to worry about odometry errors.
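For example, the relevant setting in SensorFilter.yaml would look like the following minimal excerpt (the rest of the file is unchanged):

    # SensorFilter.yaml (excerpt)
    filtering_method: GROUND_TRUTH # use Webots ground truth instead of estimating odometry on the robot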

Dataset Generation

Synthetic and semi-synthetic training data for vision can be generated using NUpbr. Pre-generated datasets for training the Visual Mesh are on the NAS in the lab.

NUpbr

NUpbr is a Physically Based Rendering tool created in Blender. It creates semi-synthetic images with corresponding segmentation masks for training.

Information on NUpbr is available in the NUpbr guide on NUbook and in the NUpbr GitHub repository.

Setting Up The Data

The Visual Mesh requires raw images, segmentation masks and metadata, as outlined in the Quick Start Guide. NUpbr can provide all of these as output, and premade data is available on the NAS. The data then needs to be converted to the tfrecord format using a script in the Visual Mesh repository; the Quick Start Guide describes how to use it.

The Visual Mesh

Training and Testing

Go to the NUbook Visual Mesh Getting Started guide to find out how to train and test a network, with an example dummy dataset.

Exporting Configuration

The resulting network should be exported to a yaml file and added to the NUbots codebase by completing the following steps.

  1. Create a base configuration file. Example yaml files can be found in the Visual Mesh repository and in the NUbots repository.

  2. Export the weights of your trained Mesh to this configuration file using the following command, where <output_dir> is the directory of the configuration file:

    ./mesh.py export <output_dir>
  3. Add this configuration file to the NUbots repository in the VisualMesh module. Replace or add a configuration file depending on the use case of the Mesh: RobocupNetwork.yaml is for playing soccer on the real robot and WebotsNetwork.yaml is for playing soccer in the Webots simulator. View the Git Guide for information on using Git and submitting this change in a pull request.

Camera Calibration

The vision system cannot work optimally if the cameras are not calibrated correctly. The input page describes the camera parameters that can be calibrated.

An automatic camera calibration tool is available in the NUbots repository. See the camera calibration guide to find out how to use this tool.
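As a rough illustration only, a per-camera lens configuration typically contains parameters along the following lines. The key names and values below are assumptions rather than the exact NUbots schema, so refer to the input page and the camera calibration guide for the real parameters:

    # Hypothetical lens configuration excerpt (key names and values are assumptions)
    lens:
      projection: EQUIDISTANT # lens projection model
      focal_length: 0.0024    # placeholder focal length
      centre: [0.0, 0.0]      # placeholder optical centre offset
      k: [0.0, 0.0]           # placeholder distortion coefficients
      fov: 1.6                # placeholder field of view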

Testing

After updating the Visual Mesh in the NUbots repository, it should be tested before merging. Refer to the Getting Started guide for assistance with the following steps.

  1. Build the code, ensuring ROLE_test-visualmesh is set to ON in ./b configure -i, and install it to the robot. Ensure the new configuration file is installed by using the -cu or -co options when installing. Check out the Build System page to find out more about the options for installing onto the robot.

  2. When your new Visual Mesh is installed onto the robot, connect to the robot using:

    ssh nubots@<address>
  3. Ensure the robot is sending vision data:

    nano config/NetworkForwarder.yaml

    CompressedImage, Balls, Goals and GreenHorizon should be on (see the configuration sketch after this list). Run NUsight using yarn prod and navigate to the NUsight page in your browser. More on NUsight can be found on the NUsight NUbook page. If you have not already set up and built NUsight, refer to the Getting Started page.

  4. Run the test/visualmesh role:

    ./test/visualmesh
  5. Wait for the cameras to load and then watch the Vision tab in NUsight. To determine if the output is correct, consult the vision page for images of the expected output.
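The NetworkForwarder.yaml settings referred to in step 3 might look something like the excerpt below. The exact nesting and full message names here are assumptions, so check the file itself for the correct keys:

    # NetworkForwarder.yaml (illustrative excerpt, exact structure may differ)
    messages:
      message.output.CompressedImage: true
      message.vision.Balls: true
      message.vision.Goals: true
      message.vision.GreenHorizon: true
      # message.vision.VisualMesh: true # enable only when viewing the mesh itself (see below)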

To see the Visual Mesh itself in NUsight, you will need to enable the message.vision.VisualMesh message in the NetworkForwarder.yaml file. Most of the time the networking should work, but there may occasionally be issues since the Visual Mesh data is large. If you have issues seeing the Visual Mesh output in NUsight, you will need to log the data and play it back in NUsight using DataPlayback. Use the steps in the DataLogging and DataPlayback guide to record and play back data, adapting the instructions with the following hints:

  • In step 1 of Recording Data, use the test/visualmesh role to record the data.
  • In step 2 of Recording Data and step 4 of Playing Back Data, set message.output.CompressedImage to true and add message.vision.VisualMesh: true in both DataLogging.yaml and DataPlayback.yaml (a sketch follows this list).
  • In steps 1, 2 and 5 of Playing Back Data, use the playback role to play back the data, without changes.
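A minimal sketch of the message settings described above, assuming both files list messages under a structure similar to NetworkForwarder.yaml:

    # DataLogging.yaml and DataPlayback.yaml (illustrative excerpt, exact structure may differ)
    messages:
      message.output.CompressedImage: true
      message.vision.VisualMesh: true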

Tuning Detectors

The Visual Mesh may have positive results after training but perform poorly when used on a robot. In this case, the detectors may need tuning.

BallDetector.yaml and GoalDetector.yaml contain the values for tuning the ball and goal detectors respectively.

  1. Build and install the test/visualmesh role to a robot.

  2. SSH onto the robot.

  3. Enable NUsight messages on the robot by running

    nano config/NetworkForwarder.yaml

    and set message.vision.Balls and message.vision.Goals to true.

  4. Run NUsight using yarn prod on a computer. Set up NUsight using the Getting Started page if necessary.

  5. Run ./test/visualmesh on the robot.

  6. Alter the configuration files for the detectors while the binary is running on the robot. In a new terminal, SSH onto the robot again and run:

    nano config/BallDetector.yaml

    Change the values; upon saving, the changes will be applied immediately by the robot without needing to rebuild or rerun the ./test/visualmesh binary.

  7. Repeat step 6 for the goal detector by running

    nano config/GoalDetector.yaml

In general, adjusting the confidence_threshold on both detectors is a useful starting point for improving the results. Other variables may also give better results with different values, with the exception of log_level and the covariances (goal_projection_covariance and ball_angular_cov), which should be left unchanged.
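As a hypothetical example of such a tuning pass, the relevant part of BallDetector.yaml could look like the following. The value shown is a placeholder rather than a recommended setting:

    # BallDetector.yaml (hypothetical excerpt, placeholder value only)
    log_level: INFO            # leave unchanged
    confidence_threshold: 0.55 # placeholder: raise to cut false positives, lower to accept more candidates
    # the covariance values (e.g. ball_angular_cov) should also be left unchanged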

Benchmarks

Benchmark results for various aspects of the vision system. These benchmarks tell us how well the system performs and whether a new method improves it. In general, benchmarks should be recalculated whenever a change may affect the results. If no changes have been made, the benchmarks should still be verified every six months, to ensure unrelated changes did not cause issues.

Visual Mesh

Test results from the Visual Mesh, broken down for each class with precision and recall values. The complete output from the Visual Mesh test can be found on the Google Drive, in the Benchmarks folder with a date. As well as the information provided on this page, the output contains per-class graphs for F1, Informedness, Markedness, MCC, MI, PR, Precision, Recall and ROC.
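For reference, precision for a class is the fraction of samples predicted as that class that truly belong to it (TP / (TP + FP)), and recall is the fraction of samples truly belonging to that class that were predicted as it (TP / (TP + FN)). The per-class breakdowns below show where the remaining misclassified samples go.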

Visual Mesh benchmarks should be updated when a new network is trained and added to the NUbots codebase, or if the Visual Mesh code changes in a way that would affect these values.

Real World

Coming soon...

Webots

Full metrics can be found in the Benchmarks folder on the NUbots Google Drive, and the test dataset can be found on the NAS device. These benchmarks should be updated if a new Webots Visual Mesh network replaces the old network, or if the RoboCup environment in Webots changes (in which case a new network should be trained).

Ball

Precision: 0.9552730904880183

Recall: 0.9474011799668628

Predicted Ball samples are really:

  • Ball: 95.527%
  • Goal: 0.198%
  • Line: 1.132%
  • Field: 2.163%
  • Robot: 0.902%
  • Environment: 0.078%

Real Ball samples are predicted as:

  • Ball: 94.740%
  • Goal: 0.506%
  • Line: 1.528%
  • Field: 1.077%
  • Robot: 2.031%
  • Environment: 0.117%

Goal

Precision: 0.9744150403727407

Recall: 0.9818396740798292

Predicted Goal samples are really:

  • Ball: 0.006%
  • Goal: 97.442%
  • Line: 0.176%
  • Field: 0.462%
  • Robot: 0.100%
  • Environment: 1.814%

Real Goal samples are predicted as:

  • Ball: 0.002%
  • Goal: 98.184%
  • Line: 0.146%
  • Field: 0.210%
  • Robot: 0.056%
  • Environment: 1.402%

Line

Precision: 0.9639229445906177

Recall: 0.9643768381560178

Predicted Line samples are really:

  • Ball: 0.036%
  • Goal: 0.280%
  • Line: 96.392%
  • Field: 2.885%
  • Robot: 0.365%
  • Environment: 0.042%

Real Line samples are predicted as:

  • Ball: 0.026%
  • Goal: 0.341%
  • Line: 96.438%
  • Field: 2.910%
  • Robot: 0.283%
  • Environment: 0.002%

Field

Precision: 0.9970544868367878

Recall: 0.9969097621584649

Predicted Field samples are really:

  • Ball: 0.001%
  • Goal: 0.017%
  • Line: 0.120%
  • Field: 99.705%
  • Robot: 0.074%
  • Environment: 0.082%

Real Field samples are predicted as:

  • Ball: 0.002%
  • Goal: 0.037%
  • Line: 0.119%
  • Field: 99.691%
  • Robot: 0.087%
  • Environment: 0.064%

Robot

Precision: 0.9772128370131125

Recall: 0.9823020939044803

Predicted Robot samples are really:

  • Ball: 0.006%
  • Goal: 0.015%
  • Line: 0.038%
  • Field: 0.283%
  • Robot: 97.721%
  • Environment: 1.936%

Real Robot samples are predicted as:

  • Ball: 0.003%
  • Goal: 0.026%
  • Line: 0.050%
  • Field: 0.244%
  • Robot: 98.230%
  • Environment: 1.447%

Environment

Precision: 0.9977592789558581

Recall: 0.9970136589036935

Predicted Environment samples are really:

  • Ball: 0.000%
  • Goal: 0.040%
  • Line: 0.000%
  • Field: 0.023%
  • Robot: 0.160%
  • Environment: 99.776%

Real Environment samples are predicted as:

  • Ball: 0.000%
  • Goal: 0.053%
  • Line: 0.001%
  • Field: 0.030%
  • Robot: 0.215%
  • Environment: 99.701%

Object Positions

Post-processing heuristics use the Visual Mesh results to find the positions of likely objects in the image. These benchmarks measure the error between the real position and the calculated position in the three-dimensional world. They should be updated if the post-processing heuristics are updated, or if the Visual Mesh output changes.

Real World

Coming soon...

Webots

Coming soon...
