This page details how to maintain various subsystems within the main codebase.
`nugus_controller` in Webots has functionality to send live odometry ground truth data to the robot. To use this:

- Set up Webots and compile the NUWebots code (you do not need the RoboCup setup).
- Run the `kid.wbt` world in Webots.
- Run `webots/keyboardwalk` in NUbots. See the Getting Started page to set up the NUbots codebase.
- Run NUsight or PlotJuggler and observe the graphs for the robot's prediction, the ground truth, and the error of the torso's real rotation and translation relative to world.
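The steps above might look like the following in a terminal. This is a rough sketch only: the `./b` subcommands shown are assumptions, so follow the Getting Started page for the exact commands.

```bash
# Inside the NUbots development environment (subcommands assumed):
./b configure                  # enable the webots/keyboardwalk role
./b build                      # compile the code
./b run webots/keyboardwalk    # connect to the kid.wbt world running in Webots
```

With both Webots and the role running, the ground truth and predicted odometry can be compared in NUsight or PlotJuggler.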
The accuracy of the vision system relies on the accuracy of odometry and kinematics because they affect the placement of the mesh and the green horizon. It is important that these systems work reasonably well, otherwise the robot may have issues detecting objects.
If you are using Webots, you can turn on odometry ground truth in the SensorFilter module. Go to the `SensorFilter.yaml` configuration file and enable the `GROUND_TRUTH` flag. This will use the ground truth odometry from Webots instead of the odometry computed by the robot, which is useful for testing the vision system without having to worry about odometry errors.
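As a sketch, the setting might look like the following in `SensorFilter.yaml`. The key name and file layout here are assumptions based on the flag name above; check the actual file in the codebase.

```yaml
# SensorFilter.yaml (illustrative excerpt)
# When enabled, use Webots ground truth odometry instead of the
# odometry computed from the robot's sensors.
GROUND_TRUTH: true
```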
Synthetic and semi-synthetic training data for vision can be generated using NUpbr. Pre-generated datasets for training the Visual Mesh are on the NAS in the lab.
NUpbr is a Physically Based Rendering tool created in Blender. It creates semi-synthetic images with corresponding segmentation masks for training.
The Visual Mesh requires raw images, segmentation masks and metadata, as outlined on the Quick Start Guide. NUpbr can provide all of these as output, and premade data is available on the NAS. The data then needs to be converted to the tfrecord format using a script on the Visual Mesh repository. The Quick Start Guide describes how to use it.
Go to the NUbook Visual Mesh Getting Started guide to find out how to train and test a network, with an example dummy dataset.
The resulting network should be exported to a yaml file and added to the NUbots codebase, by completing the following steps.
Export the weights of your trained Mesh to this configuration file using the command `./mesh.py export <output_dir>`, where `<output_dir>` is the directory of the configuration file.
Add this configuration file to the NUbots repository in the VisualMesh module. Replace or add a configuration file depending on the use case of the Mesh: `RobocupNetwork.yaml` is for soccer playing on the real robot and `WebotsNetwork` is for soccer playing in the Webots simulator. View the Git Guide for information on using Git and submitting this change in a pull request.
The vision system cannot work optimally if the cameras are not calibrated correctly. The input page describes the camera parameters that can be calibrated.
An automatic camera calibration tool is available in the NUbots repository. See the camera calibration guide to find out how to use this tool.
After updating the Visual Mesh in the NUbots repository, it should be tested before merging. Refer to the Getting Started guide for assistance with the following steps.
Build the code, ensuring `ROLE_test-visualmesh` is enabled in `./b configure -i`, and install it to the robot. Ensure the new configuration file is installed by using the `-co` options when installing. Check out the Build System page to find out more about options when installing onto the robot.
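As a sketch, the build and install sequence might look like this. The exact flags and their meanings are on the Build System page; `<address>` stands in for the robot's address.

```bash
./b configure -i            # enable ROLE_test-visualmesh in the interactive menu
./b build                   # compile the test/visualmesh role
./b install <address> -co   # install with the -c and -o options so the new
                            # configuration file reaches the robot
```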
When your new Visual Mesh is installed onto the robot, connect to the robot using `ssh nubots@<address>`.

Ensure the robot is sending vision data by running `nano config/NetworkForwarder.yaml`. `CompressedImage`, `Balls`, `Goals` and `GreenHorizon` should be on.

Run NUsight using `yarn prod` and navigate to the NUsight page in your browser. More on NUsight can be found on the NUsight NUbook page. If you have not already set up and built NUsight, refer to the Getting Started page.
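An illustrative excerpt of `NetworkForwarder.yaml` with those messages on is shown below. The exact message names and file structure are assumptions; check the real file on the robot.

```yaml
# NetworkForwarder.yaml (illustrative excerpt)
messages:
  message.output.CompressedImage: true
  message.vision.Balls: true
  message.vision.Goals: true
  message.vision.GreenHorizon: true
```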
Wait for the cameras to load and then watch the Vision tab in NUsight. To determine if the output is correct, consult the vision page for images of the expected output.
To see the Visual Mesh itself in NUsight, you will need to enable the `message.vision.VisualMesh` message in the `NetworkForwarder.yaml` file. Most of the time the networking should work, but there may occasionally be issues since the Visual Mesh data is large. If you have issues seeing the Visual Mesh output in NUsight, you will need to log the data and play it back in NUsight using DataPlayback. Use the steps in the DataLogging and DataPlayback guide to record and play back data, adjusting the instructions for our purpose using the following hints:
- In step 1 of Recording Data, use the `test/visualmesh` role to record the data.
- In step 2 of Recording Data and step 4 of Playing Back Data, set `message.vision.VisualMesh: true` in both `DataLogging.yaml` and `DataPlayback.yaml`.
- In steps 1, 2 and 5 of Playing Back Data, use the `playback` role to play back the data, without changes.
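The configuration change in the second hint is a single setting in each file, sketched below. The surrounding file structure is an assumption; follow the DataLogging and DataPlayback guide for the real layout.

```yaml
# DataLogging.yaml and DataPlayback.yaml (illustrative excerpt)
messages:
  message.vision.VisualMesh: true
```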
The Visual Mesh may give positive results after training but perform poorly when used on a robot. In this case, the detectors may need tuning.
1. Build and install the `test/visualmesh` role to a robot.
2. SSH onto the robot.
3. Enable NUsight messages on the robot by running `nano config/NetworkForwarder.yaml`.
4. Run NUsight using `yarn prod` on a computer. Set up NUsight using the Getting Started page if necessary.
5. Run `./test/visualmesh` on the robot.
6. Alter the configuration file for the detectors while the binary is running on the robot. In a new terminal, SSH onto the robot again and run `nano config/BallDetector.yaml`. Change the values; upon saving, the changes will be used immediately by the robot without needing to rebuild or rerun the binary.
7. Repeat step 6 for the goal detector by running `nano config/GoalDetector.yaml`.
In general, it might be useful to adjust the `confidence_threshold` on both detectors to improve the results. Other variables may give better results with different values, except for `log_level` and the covariances.
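A tuning session might touch only a line or two of a detector's configuration, for example (values illustrative; key names other than `confidence_threshold` and `log_level` are assumptions):

```yaml
# BallDetector.yaml (illustrative excerpt)
log_level: INFO            # leave unchanged
confidence_threshold: 0.6  # raise to reject weak detections, lower to accept more
```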
This section records benchmark results for various aspects of the vision system. These benchmarks tell us how well the system performs and whether a new method improves it. In general, benchmarks should be recalculated whenever a change may affect the results. If no changes have been made, the benchmarks should be verified every six months to ensure unrelated changes did not cause issues.
Test results from the Visual Mesh, broken down for each class with precision and recall values. The complete output from the Visual Mesh test can be found on the Google Drive, in the Benchmarks folder with a date. As well as the information provided on this page, the output contains graphs for each class for F1, Informedness, Markedness, MCC, MI, PR, Precision, Recall, ROC.
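The per-class breakdown below is a confusion matrix over mesh samples: each "Predicted X samples are really" list reads column-wise (its X entry is the precision for class X), and each "Real X samples are predicted as" list reads row-wise (its X entry is the recall). A small sketch with made-up counts shows the relationship:

```python
# Illustrative only: hypothetical counts, not the real benchmark data.
# confusion[true_class][predicted_class] = number of mesh samples
confusion = {
    "ball":  {"ball": 90, "field": 10},
    "field": {"ball": 30, "field": 870},
}

def precision_recall(confusion, cls):
    tp = confusion[cls][cls]                                  # correctly labelled samples
    predicted = sum(row[cls] for row in confusion.values())   # column sum: all predicted as cls
    actual = sum(confusion[cls].values())                     # row sum: all really cls
    return tp / predicted, tp / actual

p, r = precision_recall(confusion, "ball")
print(f"ball: precision={p:.3f} recall={r:.3f}")  # ball: precision=0.750 recall=0.900
```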
Visual Mesh benchmarks should be updated when a network is trained and added to the NUbots codebase, or if the Visual Mesh code changes in a way that would affect these values.
Full metrics can be found in the Benchmarks folder on the NUbots Google Drive. The test dataset can be found on the NAS device. These benchmarks should be updated if a new Webots Visual Mesh network replaces the old network, or if the RoboCup environment in Webots changes (a new network should be trained in this case).
Predicted Ball samples are really:
- Ball: 95.527%
- Goal: 0.198%
- Line: 1.132%
- Field: 2.163%
- Robot: 0.902%
- Environment: 0.078%
Real Ball samples are predicted as:
- Ball: 94.740%
- Goal: 0.506%
- Line: 1.528%
- Field: 1.077%
- Robot: 2.031%
- Environment: 0.117%
Predicted Goal samples are really:
- Ball: 0.006%
- Goal: 97.442%
- Line: 0.176%
- Field: 0.462%
- Robot: 0.100%
- Environment: 1.814%
Real Goal samples are predicted as:
- Ball: 0.002%
- Goal: 98.184%
- Line: 0.146%
- Field: 0.210%
- Robot: 0.056%
- Environment: 1.402%
Predicted Line samples are really:
- Ball: 0.036%
- Goal: 0.280%
- Line: 96.392%
- Field: 2.885%
- Robot: 0.365%
- Environment: 0.042%
Real Line samples are predicted as:
- Ball: 0.026%
- Goal: 0.341%
- Line: 96.438%
- Field: 2.910%
- Robot: 0.283%
- Environment: 0.002%
Predicted Field samples are really:
- Ball: 0.001%
- Goal: 0.017%
- Line: 0.120%
- Field: 99.705%
- Robot: 0.074%
- Environment: 0.082%
Real Field samples are predicted as:
- Ball: 0.002%
- Goal: 0.037%
- Line: 0.119%
- Field: 99.691%
- Robot: 0.087%
- Environment: 0.064%
Predicted Robot samples are really:
- Ball: 0.006%
- Goal: 0.015%
- Line: 0.038%
- Field: 0.283%
- Robot: 97.721%
- Environment: 1.936%
Real Robot samples are predicted as:
- Ball: 0.003%
- Goal: 0.026%
- Line: 0.050%
- Field: 0.244%
- Robot: 98.230%
- Environment: 1.447%
Predicted Environment samples are really:
- Ball: 0.000%
- Goal: 0.040%
- Line: 0.000%
- Field: 0.023%
- Robot: 0.160%
- Environment: 99.776%
Real Environment samples are predicted as:
- Ball: 0.000%
- Goal: 0.053%
- Line: 0.001%
- Field: 0.030%
- Robot: 0.215%
- Environment: 99.701%
Post-processing heuristics use Visual Mesh results to find the position of likely objects in the image. These benchmarks are the error between the real position and the calculated position in the three-dimensional world. This should be updated if the post-processing heuristics are updated, or if the Visual Mesh output changes.
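The error these benchmarks report can be sketched as the Euclidean distance between the real and calculated 3D positions; the actual benchmark tooling may aggregate this differently.

```python
import math

def position_error(real, estimate):
    """Euclidean distance (metres) between real and estimated 3D positions."""
    return math.dist(real, estimate)

# Hypothetical ball position: ground truth vs. detector output, in metres
err = position_error((2.0, 1.0, 0.0), (2.3, 1.4, 0.0))
print(f"position error: {err:.2f} m")  # position error: 0.50 m
```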