10. Single Drone on Local Machine Using Cluster Architecture¶
Follow these instructions for quick and easy testing of controllers on a single drone on a single local machine. Use in the following scenarios:
- Local development of single (or multiple) drone applications in simulation (gazebo)
- Local development of offboard (drone control software on pc not on drone)
- Local development of distributed onboard (drone control software running on drone itself)
- Testing controllers/software on the real drone software/communications architecture that would be used in the BRL.
This is considered to be step 2a for the Starling development process.
Note: Reading the background may be useful but not necessary.
- 10. Single Drone on Local Machine Using Cluster Architecture
- 10.1 Contents
- 10.2 Drone and Simulator on a local cluster
- 10.3 Controlling the Drones
- 10.4 Development on the Drone and Simulator
- 10.5 Troubleshooting/FAQs
10.2 Drone and Simulator on a local cluster¶
First check that you have installed the single prerequisite, docker; see Getting Started.
10.2.1 Starting the cluster¶
In the root directory run one of the following in a terminal:

```
./run_k3s.sh
./run_k3s.sh -ow  # Will automatically open the UI webpages for you.
```
This will start the following:
- The cluster root node, which governs the running of all the other parts.
- A node running the Gazebo simulation environment
- A node running the following:
- An initialisation routine spawning 1 Iris quadcopter model
- A SITL (Software In The Loop) instance running the PX4 autopilot.
- A MAVROS node connected to the SITL instance
- A simple UI with a go and estop button.
- The cluster control dashboard (printing out the access key string)
Note: this might take a while on first run as downloads are required.
Note: this installation will ask for root (sudo) permission when necessary.
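A quick way to confirm that the components above have come up is to list the pods on the cluster. This is only a sketch: it assumes the `kubectl` CLI on this machine is configured against the local k3s cluster.

```shell
# Sanity check: list everything running on the cluster. The gazebo, drone
# (SITL + mavros) and dashboard pods should all eventually reach "Running".
if command -v kubectl >/dev/null 2>&1; then
    kubectl get pods -A
else
    echo "kubectl not found - run this on the machine hosting the cluster"
fi
```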
The User Interfaces are available in the following locations:
- Go to `http://localhost:8080` in a browser to (hopefully) see the gazebo simulator.
- Go to `http://localhost:3000` in a browser to see the starling user interface containing go/stop buttons.
- Go to `http://localhost:31771` in a browser to see the cluster dashboard. There is a lot here, and this guide will point you to the key functions. Please see this page for further details.
- Your browser of choice may not like the web-page and complain of certificate errors. Please ignore this and continue onwards. You may have to click 'advanced' or a similar option for the browser to let you in.
- To log in to the site, you will need the long login Token which is hopefully displayed by `run_k3s.sh`. This token should also be automatically placed onto your clipboard for pasting.
Note: All specified sites can be accessed from other machines by replacing `localhost` with your computer's IP address.
Note: Sometimes it might take a bit of time for the UIs to become available; give it a minute and refresh the page. With Gazebo you may accidentally be too zoomed in, or the grid may not show up. Use the mouse wheel to zoom in and out; the grid can be toggled on the left-hand pane.
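Following the note above about access from other machines, a small sketch for building those URLs from this machine's IP address (assumes `hostname -I`, as on most Linux distributions; falls back to `localhost` otherwise):

```shell
# Build the UI URLs for access from another machine on the network.
HOST_IP=$(hostname -I 2>/dev/null | awk '{print $1}')
[ -n "$HOST_IP" ] || HOST_IP=localhost
echo "Gazebo:    http://${HOST_IP}:8080"
echo "Starling:  http://${HOST_IP}:3000"
echo "Dashboard: http://${HOST_IP}:31771"
```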
10.2.2 Restarting or deleting the drone or simulator in the cluster¶
There may be cases where you wish to restart or refresh either the software running on the drones or the simulator itself (e.g. removing old models):
```
./run_k3s.sh -d  # or --delete, will remove the gazebo and drone instances
./run_k3s.sh -r  # or --restart, will restart the gazebo and drone instances
```
Note: you can also add the `-sk` option, which will skip the k3s re-download step and the dashboard check, i.e.

```
./run_k3s.sh -sk -r
```
If you wish to remove the cluster and all associated software from the machine, you will need to run the uninstall option of the start script (`-u`, see Useful Scripts below).
Note: This will remove everything to do with the starling cluster. The dashboard access token will be deleted. The container images will remain on your machine; to remove those as well, run `docker system prune --volumes`.
10.2.3 Accessing logs on the dashboard¶
Please see the instructions here
10.3 Controlling the Drones¶
10.3.1 Offboard Control¶
There are two supported methods for offboard control of either the SITL or real drones.
- Control drone directly via Mavlink, by Ground Control Station (GCS) or other Mavlink compatible method (e.g. Dronekit).
- Control drone via ROS2 node
10.3.1.1 1. Connecting a Ground Control Station via Mavlink¶
If a mavros or SITL instance is running, there will be a GCS link on `udp://localhost:14553` (hopefully). This means that you can run a GCS such as QGroundControl or Mission Planner:
- Create a comms link to `udp://localhost:14553`
- The GCS should auto detect the drone(s)
- You should be able to control and monitor any sitl drone through the standard mavlink interface.
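Before pointing a GCS at the link, you can sanity-check that the UDP endpoint exists on the local machine. A sketch, assuming `ss` from iproute2 is installed (the check is skipped if it is not):

```shell
# Extract the port from the GCS link and look for it among open UDP sockets.
GCS_URL="udp://localhost:14553"
GCS_PORT="${GCS_URL##*:}"
if command -v ss >/dev/null 2>&1 && ss -uln | grep -q ":${GCS_PORT}"; then
    echo "MAVLink GCS endpoint is listening on port ${GCS_PORT}"
else
    echo "Port ${GCS_PORT} not found - is the cluster running?"
fi
```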
This is a quick and easy way to control the SITL instance via Mavlink.
10.3.1.2 2. Running Example ROS2 Offboard Controller node¶
An example offboard ROS2 controller can then be connected to the SITL by running `./scripts/start_example_controller.sh` in a terminal:
This will first build the example controller so it is available locally. Then deploy the example controller to the cluster. It will take a few minutes to startup.
When run, the example will confirm in the terminal that it has connected and that it is waiting for mission start. To start the mission, press the green go button in the starling user interface, which will send a message over the `/mission_start` topic. A confirmation message should appear in the terminal, and the drone will arm (propellers spinning up) and take off. It will fly in circles for 10 seconds before landing and disarming.
Once the controller has completed, the process will exit and the controller will restart, allowing you to run the mission again.
If used with multiple vehicles, it will automatically find all drones broadcasting mavros topics, and start a controller for each one.
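If you prefer the command line to the UI's go button, the mission start message can in principle be published with the ROS2 CLI from a ROS2-enabled shell (e.g. inside a container on the cluster). This is only a sketch: the message type `std_msgs/msg/String` and the payload are assumptions, so check the controller source for the type it actually expects on `/mission_start`.

```shell
# Publish once to /mission_start (message type and payload are assumptions).
if command -v ros2 >/dev/null 2>&1; then
    ros2 topic pub --once /mission_start std_msgs/msg/String "{data: 'go'}"
else
    echo "ros2 CLI not available in this shell"
fi
```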
To remove or restart the controller, use the `-d` or `-r` options respectively with the script:

```
./scripts/start_example_controller.sh -d  # Delete or remove the controller
./scripts/start_example_controller.sh -r  # Restart the controller
```
10.3.2 Onboard Control¶
TODO: Implement example onboard controller and associated scripts
10.4 Development on the Drone and Simulator¶
10.4.1 Useful Scripts:¶
There are a number of useful scripts in the `/scripts` directory of this repository. Scripts can be run from any location, but for this tutorial we assume the user is in the root directory.
- `./scripts/start_k3s.sh` - This starts the cluster.
    - `-u` will uninstall the cluster from the machine (and remove any running processes).
    - `-sk` will skip the installation check for k3s.
- `./scripts/start_single_px4sitl_gazebo.sh` - This starts the starling core functions (Dashboard and UI). It also starts the gazebo simulator and a 'drone' process running the PX4 SITL and a connected Mavros ROS2 node. Assumes the cluster has been started.
    - `-d` will stop the gazebo and all connected 'drone' processes only (use if reloading the controller).
    - `-r` will restart the gazebo and all connected 'drone' processes.
    - `-sk` will skip the starting/check that the starling core functions are running.
    - `-ow` will automatically open the UI webpages.
- `./scripts/start_starling_base.sh` - This starts the starling user interface and dashboard. Automatically run by `./scripts/start_single_px4sitl_gazebo.sh`.
- `./scripts/start_dashboard.sh` - This starts the dashboard process on the cluster. Assumes the cluster has already been started. Automatically run by `./scripts/start_starling_base.sh`.
    - `-ow` will automatically open the UI webpages.

The `./run_k3s.sh` script internally runs `start_single_px4sitl_gazebo.sh`. Any options passed will be forwarded to the relevant script.
10.4.2 Modifying the example controller¶
In the controllers folder there is an `example_controller_python` which you should have seen in action in the example above. The ROS2 package is in `example_controller_python`. Any edits made to the ROS2 package can be built by running `make` in the controllers directory. This will use `colcon build` to build the node and output a local image named `example_controller_python`.
Inside the controllers folder, there is an annotated kubernetes config file, `k8.example_controller_python.amd64.yaml`. This specifies the deployment of a pod which contains your local `example_controller_python` image (this line).
Similar to before, you can start up the local example controller using the `./scripts/start_example_controller.sh` script.
But if you have made a copy, or wish to run your own version of the configuration, you can manually deploy your controller by running the following:
```
kubectl apply -f k8.example_controller_python.amd64.yaml
kubectl delete -f k8.example_controller_python.amd64.yaml  # To delete
```
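After applying the config, it is worth checking that the pod actually came up. A sketch using standard kubectl subcommands; the `example-controller` name pattern is an assumption, so list the pods first to find the real name.

```shell
# Find the controller pod and show its logs.
if command -v kubectl >/dev/null 2>&1; then
    kubectl get pods
    POD=$(kubectl get pods -o name | grep example-controller | head -n1)
    if [ -n "$POD" ]; then
        kubectl logs "$POD"
    fi
else
    echo "kubectl not found - run this on the machine hosting the cluster"
fi
```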
See kubernetes configuration for more details.
For debugging, you can both see the logs and execute commands on your controller container through the dashboard. See the instructions here.
Inside, you can `source install/setup.bash` and run ROS2 commands as normal.
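For example, once exec'd into the controller container, a session might look like the following sketch (assuming the ROS2 CLI is on the container's path; the commands are skipped where the workspace or CLI is absent):

```shell
# Inside the controller container (opened via the dashboard's exec):
if [ -f install/setup.bash ]; then
    . install/setup.bash            # overlay the built workspace
fi
if command -v ros2 >/dev/null 2>&1; then
    ros2 node list                  # the example controller node should appear
    ros2 topic list | grep mavros   # mavros topics from the SITL drone
fi
```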
10.4.3 Creating your own from scratch¶
Of course, you can create your own controller from scratch. Inside your controller repository, the following is required:
1. Your ROS2 package folder (what would usually go inside the `src` directory of your ROS2 workspace).
2. A Dockerfile (named `Dockerfile`) which is derived `FROM uobflightlabstarling/starling-controller-base`; use the example Dockerfile as a template.
3. A Kubernetes YAML config file specifying either a Pod or a Deployment. Use the example `k8.example_controller_python.amd64.yaml` as a template. There are annotated comments. Also see here for more details.
Your Dockerfile can be built by running the following in the directory containing the Dockerfile:

```
docker build -t <name of your controller> .
```
Once built, the configuration file must have a container config specifying your own image name.
Your container can then be deployed manually using `kubectl apply -f <your config>.yaml` as above.
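Putting these steps together, a build-and-deploy cycle for a custom controller might look like the following sketch. Note that `my_controller` and `k8.my_controller.yaml` are placeholder names for your own image and config file.

```shell
# Build the controller image, deploy it, then tear it down when finished.
if command -v docker >/dev/null 2>&1 && command -v kubectl >/dev/null 2>&1; then
    docker build -t my_controller .          # run from the directory with the Dockerfile
    kubectl apply -f k8.my_controller.yaml   # deploy the pod/deployment to the cluster
    kubectl get pods                         # check the controller reaches Running
    kubectl delete -f k8.my_controller.yaml  # remove when finished
else
    echo "docker and kubectl are required"
fi
```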