Eclipse openDuT
Eclipse openDuT provides an open framework to automate the testing and validation process for automotive software and applications in a reliable, repeatable and observable way. Eclipse openDuT is hardware-agnostic with respect to the execution environment and supports different hardware interfaces and standards, making the framework broadly usable. It supports both on-premise installations and hosting in a cloud infrastructure. Eclipse openDuT considers a potentially distributed network of real (hardware-in-the-loop, HIL) and virtual (software-in-the-loop, SIL) devices under test. Eclipse openDuT reflects hardware capabilities and constraints along with the chosen test method. Test cases are not limited to a specific domain, but the framework especially targets functional and explorative security tests.
User Manual
Learn how to use openDuT and its individual components.
User Manual for CARL
CARL provides the backend service for openDuT. It manages information about all DUTs and coordinates how they are configured.
CARL also serves a web frontend, called LEA, for this purpose.
Setup of CARL
Currently, our setup is automated via Docker Compose.
If you want to use CARL and its components on a separate machine, e.g. a Raspberry Pi or any other machine, this guide shows all necessary steps to get CARL up and running.
- Install Git, if not already installed, and check out the openDuT repository:
git clone https://github.com/eclipse-opendut/opendut.git
- Install docker.io and docker-compose-v2.
- Optional: Change the Docker image location CARL should be pulled from in .ci/deploy/localenv/docker-compose.yml. By default, CARL is pulled from ghcr.io.
- Set up the /etc/hosts file: Add the following lines to the /etc/hosts file on the host system to access the services from the local network.
192.168.56.9 opendut.local
192.168.56.9 auth.opendut.local
192.168.56.9 netbird.opendut.local
192.168.56.9 netbird-api.opendut.local
192.168.56.9 signal.opendut.local
192.168.56.9 carl.opendut.local
192.168.56.9 nginx-webdav.opendut.local
192.168.56.9 opentelemetry.opendut.local
192.168.56.9 monitoring.opendut.local
- Start the local test environment using Docker Compose. In this step, secrets are created and all containers are started.
# configure project path
export OPENDUT_REPO_ROOT=$(git rev-parse --show-toplevel)
# start provisioning and create .env file
docker compose --file ${OPENDUT_REPO_ROOT:-.}/.ci/deploy/localenv/docker-compose.yml up --build provision-secrets
# start the environment
docker compose --file ${OPENDUT_REPO_ROOT:-.}/.ci/deploy/localenv/docker-compose.yml --env-file ${OPENDUT_REPO_ROOT:-.}/.ci/deploy/localenv/data/secrets/.env up --detach --build
The secrets created during the first docker compose command can be found in .ci/deploy/localenv/data/secrets/.env.
If everything worked and is up and running, you can follow the EDGAR Setup Guide.
Shutdown the environment
- Stop the local test environment using docker compose.
docker compose --file ${OPENDUT_REPO_ROOT:-.}/.ci/deploy/localenv/docker-compose.yml down
- Destroy the local test environment using docker compose.
docker compose --file ${OPENDUT_REPO_ROOT:-.}/.ci/deploy/localenv/docker-compose.yml down --volumes
Configuration
- To configure CARL, you can create a configuration file under /etc/opendut/carl.toml.
- If you followed the setup guide for CARL, there is no need to manually create this carl.toml file.
The possible configuration values and their defaults can be seen here:
[network]
bind.host = "0.0.0.0"
bind.port = 8080
remote.host = "localhost"
remote.port = 8080
[network.tls]
enabled = true
certificate = "/etc/opendut/tls/carl.pem"
key = "/etc/opendut/tls/carl.key"
ca = "/etc/opendut/tls/ca.pem"
[network.oidc]
enabled = false
[network.oidc.client]
id = "tbd"
secret = "tbd"
# issuer url that CARL uses
issuer.url = "https://keycloak.internal/realms/opendut/"
# issuer url that CARL tells the clients to use (required in test environment)
issuer.remote.url = "https://keycloak.internal/realms/opendut/"
issuer.admin.url = "https://keycloak.internal/admin/realms/opendut/"
scopes = ""
[network.oidc.lea]
client.id = "opendut-lea-client"
issuer.url = "https://keycloak.internal/realms/opendut/"
scopes = "openid,profile,email"
[persistence]
enabled = false
[persistence.database]
url = "" # e.g. postgresql://example.com/carl
username = ""
password = ""
[peer]
disconnect.timeout.ms = 30000
can.server_port_range_start = 10000
can.server_port_range_end = 20000
ethernet.bridge.name.default = "br-opendut"
[serve]
ui.directory = "opendut-lea/"
[vpn]
enabled = true
kind = ""
[vpn.netbird]
url = ""
ca = ""
auth.type = ""
auth.secret = ""
timeout.ms = 10000
retries = 5
setup.key.expiration.ms = 86400000
[logging]
stdout = true
[opentelemetry]
enabled = false
collector.endpoint = ""
service.name = "opendut-carl"
[opentelemetry.metrics]
interval.ms = 60000
cpu.collection.interval.ms = 5000
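As an example, a minimal /etc/opendut/carl.toml might only override the values that differ between deployments. This is a sketch assuming CARL layers the file over the built-in defaults shown above; the hostname, username and password are placeholders:

```toml
[network]
remote.host = "carl.opendut.local"  # hostname under which clients reach CARL (placeholder)
remote.port = 443

[persistence]
enabled = true

[persistence.database]
url = "postgresql://example.com/carl"  # example value from the defaults above
username = "carl"                      # placeholder
password = "<secret>"                  # placeholder
```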
EDGAR
EDGAR hooks your DuT up to openDuT. It is a program to be installed on a Linux host, which is placed next to your ECU. A single-board computer, like a Raspberry Pi, is good enough for this purpose.
Within openDuT, EDGAR is a Peer in the network.
Setup
1. Preparation
Make sure you can reach CARL from your target system.
For example, if CARL is hosted at carl.opendut.local, these two commands should work:
ping carl.opendut.local
curl https://carl.opendut.local
If you're self-hosting CARL, follow the instructions in Self-Hosted Backend Server.
2. Download EDGAR
In the LEA web-UI, you can find a Downloads-menu in the sidebar.
You will then need to transfer EDGAR to your target system, e.g. via scp.
Alternatively, you can download directly to your target host with:
curl https://$CARL_HOST/api/edgar/$ARCH/download --output opendut-edgar.tar.gz
Replace $CARL_HOST with the domain where your CARL is hosted,
and replace $ARCH with the appropriate CPU architecture.
Available CPU architectures are:
- x86_64-unknown-linux-gnu (most desktop PCs and server systems)
- armv7-unknown-linux-gnueabihf (Raspberry Pi)
- aarch64-unknown-linux-gnu (ARM64 systems)
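For instance, with placeholder values for a local deployment and an x86_64 target, the download URL is assembled like this (the curl call is commented out here; run it against your own CARL):

```shell
# Example values -- replace with your own CARL domain and target architecture.
CARL_HOST=carl.opendut.local
ARCH=x86_64-unknown-linux-gnu

URL="https://${CARL_HOST}/api/edgar/${ARCH}/download"
echo "$URL"

# Then download the archive from your CARL:
# curl "$URL" --output opendut-edgar.tar.gz
```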
3. Unpack the archive
Run these commands to unpack EDGAR and change into the directory:
tar xf opendut-edgar.tar.gz
cd opendut-edgar/
EDGAR should print version information if you run:
./opendut-edgar --version
If this prints an error, it is likely that you downloaded the wrong CPU architecture.
4. CAN Setup
If you want to use CAN, follow the steps in CAN Setup before continuing.
5. Plugins
Depending on your target hardware, you might want to use plugins to perform additional setup steps. If so, follow the steps in Plugins before continuing.
6. Scripted Setup
EDGAR comes with a scripted setup, which you can initiate by running:
./opendut-edgar setup managed <SETUP-STRING>
You can get the <SETUP-STRING>
from LEA or CLEO after creating a Peer.
This will configure your operating system and start the EDGAR Service, which will receive its configuration from CARL.
CAN Setup
If you want to use CAN, it is mandatory to set the environment variable OPENDUT_EDGAR_SERVICE_USER
as follows:
export OPENDUT_EDGAR_SERVICE_USER=root
When a cluster is deployed, EDGAR automatically creates a virtual CAN interface (by default: br-vcan-opendut
) that is used as a bridge between Cannelloni instances and physical CAN interfaces. EDGAR automatically connects all CAN interfaces defined for the peer in CARL to this bridge interface.
This also works with virtual CAN interfaces, so if you do not have a physical CAN interface and want to test the CAN functionality nevertheless, you can create a virtual CAN interface as follows. Afterwards, you will need to configure it for the peer in CARL.
# Optionally, replace vcan0 with another name
ip link add dev vcan0 type vcan
ip link set dev vcan0 up
Preparation
EDGAR relies on the Linux socketcan stack to perform local CAN routing and uses Cannelloni for CAN routing between EDGARs. Therefore, we have some dependencies.
- Install the following packages:
sudo apt install -y can-utils
- Download Cannelloni from here: https://github.com/eclipse-opendut/cannelloni/releases/
- Unpack the Cannelloni tarball and copy the files into your filesystem like so:
sudo cp libcannelloni-common.so.0 /lib/
sudo cp libsctp.so* /lib/
sudo cp cannelloni /usr/local/bin/
Testing
Once you have configured everything and deployed the cluster, you can test the CAN connection between different EDGARs as follows:
- Execute on the EDGAR leader, assuming its configured CAN interface is can0:
candump -d can0
- On an EDGAR peer, execute (again, assuming can0 is configured here):
cansend can0 01a#01020304
Now you should see a CAN frame on the leader side:
root@host:~# candump -d can0
can0  01A  [4]  01 02 03 04
Self-Hosted Backend Server
DNS
If your backend server does not have a public DNS entry, you will need to adjust the /etc/hosts file, by appending entries like these (using your server's IP address):
123.456.789.101 opendut.local
123.456.789.101 carl.opendut.local
123.456.789.101 auth.opendut.local
123.456.789.101 netbird.opendut.local
123.456.789.101 netbird-api.opendut.local
123.456.789.101 signal.opendut.local
123.456.789.101 nginx-webdav.opendut.local
123.456.789.101 opentelemetry.opendut.local
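The entries above can also be generated in one go. This sketch writes them to a local file hosts.example for inspection (for a real setup, pipe into sudo tee -a /etc/hosts instead); SERVER_IP is a placeholder for your server's IP address:

```shell
SERVER_IP=192.0.2.10   # placeholder: your backend server's IP address

# Emit one line per service name; redirect the whole loop into the file.
for name in opendut carl.opendut auth.opendut netbird.opendut netbird-api.opendut \
            signal.opendut nginx-webdav.opendut opentelemetry.opendut; do
  echo "${SERVER_IP} ${name}.local"
done > hosts.example   # for real: ... | sudo tee -a /etc/hosts
```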
Now the following command should complete without errors:
ping carl.opendut.local
Self-Signed Certificate with Unmanaged Setup
If you plan to use the unmanaged setup and your NetBird server uses a self-signed certificate, follow these steps:
- Create the certificate directory on the OLU:
mkdir -p /usr/local/share/ca-certificates/
- Copy your NetBird server certificate onto the OLU, for example, by running the following from outside the OLU:
scp certificate.crt root@10.10.4.1:/usr/local/share/ca-certificates/
Ensure that the certificate has the file extension "crt".
- Run update-ca-certificates on the OLU.
It should output "1 added" if everything works correctly.
- Now the following command should complete without errors:
curl https://netbird-api.opendut.local
Plugins
You can use plugins to perform additional setup steps. This guide assumes you already have a set of plugins you want to use.
If so, follow these steps:
- Transfer your plugins archive to the target device.
- Unpack your archive in the plugins/ folder of the unpacked EDGAR distribution. This should result in a directory with one or more .wasm files and a plugins.txt file inside.
- Write the path to the unpacked directory into the top-level plugins.txt file. This path can be relative to the plugins.txt file. The order of the paths in the plugins.txt file determines the order of execution for the plugins.
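As an illustration, the resulting layout could be created like this. The bundle name my-plugins and the plugin file setup.wasm are hypothetical; only the plugins/ folder and the plugins.txt files are from the manual:

```shell
# Hypothetical plugin bundle inside the unpacked EDGAR distribution:
mkdir -p opendut-edgar/plugins/my-plugins
touch opendut-edgar/plugins/my-plugins/setup.wasm                    # hypothetical plugin
printf 'setup.wasm\n' > opendut-edgar/plugins/my-plugins/plugins.txt # order inside the bundle

# Top-level plugins.txt: one (relative) path per bundle, in execution order.
printf 'my-plugins/\n' > opendut-edgar/plugins/plugins.txt
```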
Troubleshooting
- In case of issues during the managed setup, see:
less opendut-edgar/setup.log
- If the setup completed, but EDGAR does not show up as Healthy in LEA/CLEO, see:
journalctl -u opendut-edgar
- For troubleshooting the VPN connection, you may also want to check the NetBird logs:
cat /var/lib/netbird/client.log
cat /var/lib/netbird/netbird.err
cat /var/lib/netbird/netbird.out
- Sometimes it might be necessary to restart the EDGAR service:
# Restart service
sudo systemctl restart opendut-edgar
# Check status
systemctl status opendut-edgar
- It might happen that the NetBird client started by EDGAR is not able to connect; in that case, re-run the EDGAR setup.
- EDGAR might start with an old IP, different from what sudo wg prints. In that case, stop the netbird and opendut-edgar services and re-run the setup. This might happen to all EDGARs. If this is not enough and EDGAR keeps getting the old IP, it is necessary to set up all devices and clusters from scratch.
- If this error appears:
ERROR opendut_edgar::service::cannelloni_manager: Failure while invoking command line program 'cannelloni': 'No such file or directory (os error 2)'.
Make sure you've completed the CAN Setup.
User Manual for CLEO
CLEO is a CLI tool to create/read/update/delete resources in CARL.
From a terminal, you can configure your resources via CLEO.
CLEO can currently access the following resources:
- Cluster configurations
- Cluster deployments
- Peers
- Devices (DuTs)
- Container executors
Every resource can be created, listed, described and deleted. Some have additional features such as an option to generate a setup-key or search through them.
In general, CLEO offers a help
command to display usage information about a command. Just use opendut-cleo help
or opendut-cleo <subcommand> --help
.
Setup for CLEO
- Download the opendut-cleo binary for your target from the openDuT GitHub project: https://github.com/eclipse-opendut/opendut/releases
- Unpack the archive on your target system.
- Add a configuration file /etc/opendut/cleo.toml (Linux) and configure at least the CARL host and port.
The possible configuration values and their defaults can be seen here:
[network]
carl.host = "localhost"
carl.port = 8080
[network.tls]
ca = "/etc/opendut/tls/ca.pem"
domain.name.override = ""
[network.oidc]
enabled = false
[network.oidc.client]
id = "opendut-cleo-client"
issuer.url = "https://keycloak.internal/realms/opendut/"
scopes = "openid,profile,email"
secret = "<tbd>"
Download CLEO from CARL
It is also possible to download CLEO from one of CARL's endpoints. The downloaded file contains the CLEO binary for the requested architecture, the necessary certificate file, as well as a setup script.
The archive can be requested at https://{CARL-HOST}/api/cleo/{architecture}/download
.
Available architectures are:
- x86_64-unknown-linux-gnu
- armv7-unknown-linux-gnueabihf
- aarch64-unknown-linux-gnu
This might be the go-to way, if you want to use CLEO in your pipeline.
Once downloaded, extract the files with the command tar xvf opendut-cleo-{architecture}.tar.gz.
The archive is extracted into the current working directory; you may want to extract it into another directory of your choice instead.
Setup via CLEO command (recommended)
A setup string can be retrieved from LEA and used with the following command.
opendut-cleo setup <String> --persistent=<type>
The --persistent flag is optional. Without it, the needed environment variables are printed to the terminal.
If the flag is set to user, or given without a value, a configuration file is written to ~/.config/opendut/cleo/config.toml;
if it is set to system, the CLEO configuration file is written to /etc/opendut/cleo.toml.
Setup via script
The script used to run CLEO does not set the environment variables CLIENT_ID and CLIENT_SECRET. You have to set these manually, which can be done with the following commands:
export OPENDUT_CLEO_NETWORK_OIDC_CLIENT_ID={{ CLIENT ID VARIABLE }}
export OPENDUT_CLEO_NETWORK_OIDC_CLIENT_SECRET={{ CLIENT SECRET VARIABLE }}
These two variables can be obtained by logging in to Keycloak.
The tarball contains the cleo-cli.sh
shell script. When executed it starts CLEO after setting the
following environment variables:
OPENDUT_CLEO_NETWORK_OIDC_CLIENT_SCOPES
OPENDUT_CLEO_NETWORK_TLS_DOMAIN_NAME_OVERRIDE
OPENDUT_CLEO_NETWORK_TLS_CA
OPENDUT_CLEO_NETWORK_CARL_HOST
OPENDUT_CLEO_NETWORK_CARL_PORT
OPENDUT_CLEO_NETWORK_OIDC_ENABLED
OPENDUT_CLEO_NETWORK_OIDC_CLIENT_ISSUER_URL
SSL_CERT_FILE
SSL_CERT_FILE is a mandatory environment variable in the current state of the implementation and has the same value as OPENDUT_CLEO_NETWORK_TLS_CA. This might change in the future.
Using CLEO with parameters works by adding the parameters when executing the script, e.g.:
./cleo-cli.sh list peers
TL;DR
- Download archive from
https://{CARL-HOST}/api/cleo/{architecture}/download
- Extract
tar xvf opendut-cleo-{architecture}.tar.gz
- Add two environment variables:
export OPENDUT_CLEO_NETWORK_OIDC_CLIENT_ID={{ CLIENT ID VARIABLE }}
export OPENDUT_CLEO_NETWORK_OIDC_CLIENT_SECRET={{ CLIENT SECRET VARIABLE }}
- Execute
cleo-cli.sh
with parameters
Additional notes
- The CA certificate to be provided for CLEO depends on the certificate authority used on the server side for CARL.
Auto-Completion
You can use auto-completions in CLEO, which will fill in commands when you press TAB.
To set them up, run opendut-cleo completions SHELL
where you need to replace SHELL
with the shell that you use, e.g. bash
, zsh
or fish
.
Then you need to pipe the output into a completions-file for your shell. See your shell's documentation for where to place these files.
Commands
Listing resources
To list resources, you can decide whether to display them in a table or in JSON format.
The default output format is a table, which is used when the --output flag is not given.
The --output flag is a global argument, so it can be used at any place in the command.
opendut-cleo list --output=<format> <openDuT-resource>
Creating resources
When creating resources, it depends on the type of resource whether an ID or connected devices have to be added to the command.
opendut-cleo create <resource>
Applying Configuration Files
To use configuration files, the resource topology can be written in YAML format and applied with the following command:
opendut-cleo apply <FILE_PATH>
The YAML file can look like this:
---
version: v1
kind: PeerDescriptor
metadata:
  id: fc4f8da1-1d99-47e1-bbbb-34d0c5bf922a
  name: MyPeer
spec:
  location: Ulm
  network:
    interfaces:
      - id: 9a182365-47e8-49e3-9b8b-df4455a3a0f8
        name: eth0
        kind: ethernet
      - id: de7d7533-011a-4823-bc51-387a3518166c
        name: can0
        kind: can
        parameters:
          bitrate-hz: 250000
          sample-point: 0.8
          fd: true
          data-bitrate-hz: 500000
          data-sample-point: 0.8
  topology:
    devices:
      - id: d6cd3021-0d9f-423c-862e-f30b29438cbb
        name: ecu1
        description: ECU for controlling things.
        interface-id: 9a182365-47e8-49e3-9b8b-df4455a3a0f8
        tags:
          - ecu
          - automotive
      - id: fc699f09-1d32-48f4-8836-37e0a23cf794
        name: restbus-sim1
        description: Rest-Bus-Simulation for simulating other ECUs.
        interface-id: de7d7533-011a-4823-bc51-387a3518166c
        tags:
          - simulation
  executors:
    - id: da6ad5f7-ea45-4a11-aadf-4408bdb69e8e
      kind: container
      parameters:
        engine: podman
        name: nmap-scan
        image: debian
        volumes:
          - /etc/
          - /opt/
        devices:
          - ecu1
          - restbus-sim1
        envs:
          - name: VAR_NAME
            value: varValue
        ports:
          - 8080:8080
        command: nmap
        command-args:
          - -A
          - -T4
          - scanme.nmap.org
---
kind: ClusterConfiguration
version: v1
metadata:
  id: f90ffd64-ae3f-4ed4-8867-a48587733352
  name: MyCluster
spec:
  leader-id: fc4f8da1-1d99-47e1-bbbb-34d0c5bf922a
  devices:
    - d6cd3021-0d9f-423c-862e-f30b29438cbb
    - fc699f09-1d32-48f4-8836-37e0a23cf794
The id
fields contain UUIDs. You can generate a random UUID when newly creating a resource with the opendut-cleo create uuid
command.
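For instance, you can stamp a fresh UUID into a template before applying it. This sketch reads the kernel's UUID source so it runs anywhere on Linux; opendut-cleo create uuid works just as well:

```shell
# Generate a random UUID (alternatively: PEER_ID=$(opendut-cleo create uuid))
PEER_ID=$(cat /proc/sys/kernel/random/uuid)

# Write a minimal peer descriptor with the fresh id (the name is a placeholder).
cat > my-peer.yaml <<EOF
---
version: v1
kind: PeerDescriptor
metadata:
  id: ${PEER_ID}
  name: MyPeer
EOF

# Apply it against a running CARL:
# opendut-cleo apply my-peer.yaml
grep "id: ${PEER_ID}" my-peer.yaml
```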
Generating PeerSetup Strings
To create a PeerSetup, it is necessary to provide the PeerID of the peer:
opendut-cleo generate-setup-string <PeerID>
Decoding PeerSetup Strings
If you have a peer setup string, and you want to analyze its content, you can use the decode
command.
opendut-cleo decode-setup-string <String>
Describing resources
To describe a resource, the ID of the resource has to be provided. The output can be displayed as text or JSON-format (pretty-json
with line breaks or json
without).
opendut-cleo describe --output=<output format> <resource> --id <ID>
Finding resources
You can search for resources by specifying a search criteria string with the find
command. Wildcards such as '*'
are also supported.
opendut-cleo find <resource> "<at least one search criteria>"
Delete resources
Specify the type of resource and its ID you want to delete in CARL.
opendut-cleo delete <resource> --id <ID of resource>
Usage Examples
CAN Example
# CREATE PEER
opendut-cleo create peer --name "$NAME" --location "$NAME"
# CREATE NETWORK INTERFACE
opendut-cleo create network-interface --peer-id "$PEER_ID" --type can --name vcan0
# CREATE DEVICE
opendut-cleo create device --peer-id "$PEER_ID" --name device-"$NAME"-vcan0 --interface vcan0
# CREATE SETUP STRING
opendut-cleo generate-setup-string --id "$PEER_ID"
Ethernet Example
# CREATE PEER
opendut-cleo create peer --name "$NAME" --location "$NAME"
# CREATE NETWORK INTERFACE
opendut-cleo create network-interface --peer-id "$PEER_ID" --type eth --name eth0
# CREATE DEVICE
opendut-cleo create device --peer-id "$PEER_ID" --name device-"$NAME"-eth0 --interface eth0
# CREATE SETUP STRING
opendut-cleo generate-setup-string --id "$PEER_ID"
CLEO and jq
jq is a command-line tool for transforming JSON: it can pretty-print output or extract values, which makes it useful for automating CLI applications.
Basic jq
- jq -r removes the quotes (") from strings.
- [] constructs an array
- {} constructs an object
e.g. jq '[ { name: .[].name, id: .[].id } ]'
or: jq '[ .[] | { title, name } ]'
input
opendut-cleo list --output=pretty-json peers
output
This output will be exemplary for the following jq commands.
[
{
"name": "HelloPeer",
"id": "90dfc639-4b4a-4bbb-bad3-6f037fcde013",
"status": "Disconnected"
},
{
"name": "Edgar",
"id": "defe10bb-a12a-4ad9-b18e-8149099dd044",
"status": "Connected"
},
{
"name": "SecondPeer",
"id": "c3333d4e-9b1a-4db5-9bfa-7a0a40680f1a",
"status": "Disconnected"
}
]
input
opendut-cleo list --output=json peers | jq '[.[].name]'
output
jq extracts the names of every json element in the list of peers.
[
"HelloPeer",
"Edgar",
"SecondPeer"
]
The names can also be collected into an array with opendut-cleo list --output=json peers | jq '[.[].name]'
input
opendut-cleo list --output=json peers | jq '[.[] | select(.status=="Disconnected")]'
output
[
{
"name": "HelloPeer",
"id": "90dfc639-4b4a-4bbb-bad3-6f037fcde013",
"status": "Disconnected"
},
{
"name": "SecondPeer",
"id": "c3333d4e-9b1a-4db5-9bfa-7a0a40680f1a",
"status": "Disconnected"
}
]
input
opendut-cleo list --output=json peers | jq '.[] | select(.status=="Connected") | .id' | xargs -I{} cleo describe peer -i {}
output
Peer: Edgar
Id: defe10bb-a12a-4ad9-b18e-8149099dd044
Devices: [device-1, The Device, Another Device, Fubar Device, Lost Device]
Get the number of peers
opendut-cleo list --output=json peers | jq 'length'
Sort peers by name
opendut-cleo list --output=json peers | jq 'sort_by(.name)'
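The filters above can also be combined. For example, to count only the connected peers, here using the sample output from earlier as canned input instead of a live CARL:

```shell
# Sample of `opendut-cleo list --output=json peers`, trimmed to the fields used here.
peers='[
  {"name": "HelloPeer",  "status": "Disconnected"},
  {"name": "Edgar",      "status": "Connected"},
  {"name": "SecondPeer", "status": "Disconnected"}
]'

# Number of connected peers:
echo "$peers" | jq '[.[] | select(.status=="Connected")] | length'
# -> 1

# Their names as plain strings (-r strips the quotes):
echo "$peers" | jq -r '.[] | select(.status=="Connected") | .name'
# -> Edgar
```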
Test Execution
In a nutshell, test execution in openDuT works by executing containerized (Docker or Podman) test applications on a peer and uploading the results to a WebDAV directory. Test executors can be configured through either CLEO or LEA.
The container image specified by the image
parameter in the test executor configuration can either be a
container image already present on the peer or an image remotely available, e.g., in the Docker Hub.
A containerized test application is expected to move all test results to be uploaded to the /results/
directory
within its container and create an empty file /results/.results_ready
when all results have been copied there.
When this file exists, or when the container exits and no results have been uploaded yet,
EDGAR creates a ZIP archive from the contents of the /results
directory and uploads it to the WebDAV server
specified by the results-url
parameter in the test executor configuration.
In the testenv
launched by THEO, a WebDAV server is started automatically and can be reached at http://nginx-webdav/
.
In the Local Test Environment,
a WebDAV server is also started automatically and reachable at http://nginx-webdav.opendut.local
.
Note that executors are only run when the cluster is deployed.
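The results contract described above can be sketched as a small script that a containerized test application would run as its entry point. The test logic and file names are made up; only the /results/ path and the .results_ready marker are from the manual. Here /tmp/results stands in for the container's /results/ so the sketch runs outside a container:

```shell
RESULTS=/tmp/results   # inside the container this would be /results/
mkdir -p "$RESULTS"

# ... run the actual test suite here (made-up placeholder) ...
echo "all checks passed" > "$RESULTS/report.txt"

# Signal EDGAR that all results are in place and the upload may start.
touch "$RESULTS/.results_ready"
```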
Test Execution using CLEO
In CLEO, test executors can be configured either by passing all configuration parameters as command line arguments...
$ opendut-cleo create container-executor --help
Create a container executor using command-line arguments
Usage: opendut-cleo create container-executor [OPTIONS] --peer-id <PEER_ID> --engine <ENGINE> --image <IMAGE>
Options:
--peer-id <PEER_ID> ID of the peer to add the container executor to
-e, --engine <ENGINE> Engine [possible values: docker, podman]
-n, --name <NAME> Container name
-i, --image <IMAGE> Container image
-v, --volumes <VOLUMES>... Container volumes
--devices <DEVICES>... Container devices
--envs <ENVS>... Container envs
-p, --ports <PORTS>... Container ports
-c, --command <COMMAND> Container command
-a, --args <ARGS>... Container arguments
-r, --results-url <RESULTS_URL> URL to which results will be uploaded
-h, --help Print help
...or by providing the executor as part of a YAML file via opendut-cleo apply
.
See Applying Configuration Files for more information.
Test Execution Through LEA
In LEA, executors can be configured via the tab Executor
during peer configuration, using similar parameters as for CLEO.
Developer Manual
Learn how to get started, the workflow and tools we use, and what our architecture looks like.
Getting Started
Development Setup
Install the Rust toolchain: https://www.rust-lang.org/tools/install
You may need additional dependencies. On Ubuntu/Debian, these can be installed with:
sudo apt install build-essential pkg-config libssl-dev
To see if your development setup is generally working, you can run cargo ci check
in the project directory.
Mind that this runs the unit tests and additional code checks and may occasionally show warnings/errors related to those, rather than pure build errors.
Tips & Tricks
- cargo ci contains many utilities for development in general.
- To view this documentation fully rendered, run cargo ci doc book open.
- To have your code validated more extensively, e.g. before publishing your changes, run cargo ci check.
Starting Applications
- Run CARL (backend):
cargo carl
You can then open the UI by going to https://localhost:8080/ in your web browser.
- Run CLEO (CLI for managing CARL):
cargo cleo
- Run EDGAR (edge software):
cargo edgar service
Mind that this is in a somewhat broken state and may be removed in the future, as it's normally necessary to add the peer in CARL and then go through edgar setup.
For a more realistic environment, see test-environment.
UI Development
Run cargo lea
to continuously build the newest changes in the LEA codebase.
Then you can simply refresh your browser to see them.
Git Workflow
Pull requests
Update Branch
Our goal is to maintain a linear Git history. Therefore, we prefer git rebase over git merge[1]. The same applies when using the GitHub WebUI to update a PR's branch.
- Update with merge commit:
The first option creates a merge commit to pull in the changes from the PR's target branch. This is against our goal of a linear history, so we do not use this option.
- Update with rebase:
The second option rebases the changes of the feature branch on top of the PR's target branch. This is the option we prefer and use.
Rebase and Merge
As said above, our goal is to maintain a linear Git history. A problem arises when we want to merge pull requests (PR), because the GitHub WebUI offers ineligible options to merge a branch:
- Create a merge commit:
The first option creates a merge commit to pull in the changes from the PR's branch. This is against our goal of a linear history, so we disabled this option.
- Squash and merge:
The second option squashes all commits in the PR into a single commit and adds it to the PR's target branch. With this option, it is not possible to keep all commits of the PR separately.
- Rebase and merge:
The third option rebases the changes of the feature branch on top of the PR's target branch. This would be our preferred option, but "The rebase and merge behavior on GitHub deviates slightly from git rebase. Rebase and merge on GitHub will always update the committer information and create new commit SHAs"[2]. This doubles the number of commits and spams the history unnecessarily. Therefore, we do not use this option either.
The only viable option for us is to rebase and merge the changes via the command line. The procedures slightly differ according to the location of the feature branch.
Feature branch within the same repository
This example illustrates the procedure to merge a feature branch fubar
into a target branch development
.
1. Update the target branch with the latest changes.
git pull --rebase origin development
2. Switch to the feature branch.
git checkout fubar
3. Rebase the changes of the feature branch on top of the target branch.
git rebase development
This is a good moment to run test and validation tasks locally to verify the changes.
4. Switch to the target branch.
git checkout development
5. Merge the changes of the feature branch into the target branch.
git merge --ff-only fubar
The --ff-only argument at this point is optional, because we rebased the feature branch and Git automatically detects that a fast-forward is possible. But this flag prevents a merge commit if we messed up one of the previous steps.
6. Push the changes.
git push origin development
Feature branch of a fork repository
This example illustrates the procedure to merge a feature branch foo
from a fork bar
of the user doe
into a target branch development
.
1. Update the target branch with the latest changes.
git pull --rebase origin development
2. From the project repository, check out a new branch.
git checkout -b doe-foo development
3. Pull in the changes from the fork.
git pull git@github.com:doe/bar.git foo
4. Rebase the changes of the feature branch on top of the target branch.
git rebase development
This is a good moment to run test and validation tasks locally to verify the changes.
5. Switch to the target branch.
git checkout development
6. Merge the changes of the feature branch into the target branch.
git merge --ff-only doe-foo
The --ff-only argument at this point is optional, because we rebased the feature branch and Git automatically detects that a fast-forward is possible. But this flag prevents a merge commit if we messed up one of the previous steps.
7. Push the changes.
git push origin development
[1]: Except git merge --ff-only.
Test Environment
openDuT can be tricky to test, as it needs to modify the operating system to function and supports networking in a distributed setup.
To aid in this, we offer a virtualized test environment for development.
This test environment is set up with the help of a command line tool called theo
.
THEO stands for Test Harness Environment Operator.
It is recommended to start everything in a virtual machine, but you may also start the services on the host with docker compose, if applicable.
Setup of the virtual machine is done with Vagrant, Virtualbox and Ansible.
The following services are included in docker:
- carl
- edgar
- firefox container for UI testing (accessible via http://localhost:3000)
- includes certificate authorities and is running in headless mode
- is running in same network as carl and edgar (working DNS resolution!)
- netbird
- keycloak
Operational modes
There are two ways of operation for the test environment:
Test mode
Run everything in Docker (either on your host or, preferably, in a virtual machine). You may use the OpenDuT Browser to access the services. The OpenDuT Browser is a web browser running in a Docker container in the same network as the other services. All certificates are pre-installed and the browser is running in headless mode. It is accessible from your host via http://localhost:3000.
Development mode
Run CARL on the host in your development environment of choice and the rest in Docker. In this case there is a proxy running in the docker environment. It works as a drop-in replacement for CARL in the docker environment, which is forwarding the traffic to CARL running in an integrated development environment on the host.
Getting started
Set up the virtual machine
Then you may start the test environment in the virtual machine.
- And use it in test mode
- Or use it in development mode.
- If you want to build the project in the virtual machine you may also want to give it more resources (cpu/memory).
There are some known issues with the test environment, most of them on Windows.
Start testing
Once you have set up and started the test environment, you may start testing the services.
User interface
The OpenDuT Browser is a web browser running in a Docker container. It is based on the KasmVNC base image, which allows running containerized desktop applications from a web browser. A port forwarding is in place to access the browser from your host. It has all the necessary certificates pre-installed and is running in headless mode. You may use this OpenDuT Browser to access the services.
- Open following address in your browser: http://localhost:3000
- Credentials for the test environment (username:password):
- LEA: opendut:opendut
- Keycloak: admin:admin123456
- Netbird: netbird:netbird
- Grafana: admin:admin
- Services with user interface:
- https://carl
- https://netbird-dashboard
- https://keycloak
- http://grafana
THEO Setup in Vagrant
You may run all the containers in a virtual machine, using Vagrant.
This is the recommended way to run the test environment.
It will create a private network (subnet 192.168.56.0/24).
The virtual machine itself has the IP address 192.168.56.10.
The docker network has the IP subnet 192.168.32.0/24.
Make sure those network addresses are not occupied or in conflict with other networks accessible from your machine.
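To check for such conflicts before bringing the VM up, you can look for existing routes that overlap with these subnets. This is a quick diagnostic sketch; it assumes `ip` (iproute2) is available, as on most Linux systems:

```shell
# Print any existing routes that overlap with THEO's subnets;
# no route in the output means there is no conflict.
ip route | grep -E '192\.168\.(56|32)\.' || echo "no conflicting routes found"
```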
Requirements
- Install Vagrant
Ubuntu / Debian
sudo apt install vagrant
On most other Linux distributions, the package is called vagrant.
- Install VirtualBox (see https://www.virtualbox.org)
sudo apt install virtualbox
- Create or check if an SSH key pair is present in ~/.ssh/id_rsa
mkdir -p ~/.ssh
ssh-keygen -t rsa -b 4096 -C "opendut-vm" -f ~/.ssh/id_rsa
Setup virtual machine
- Either via cargo:
cargo theo vagrant up
- Login to the virtual machine
cargo theo vagrant ssh
Warning: Within the VM, the rust target directory is overridden to /home/vagrant/rust-target to avoid hard-linking issues. When running cargo within the VM, output will be placed in this directory!
- Ensure a distribution of openDuT is present
- By either creating one yourself (on the host)
cargo ci distribution
- Or by copying one to the target directory
target/ci/distribution/x86_64-unknown-linux-gnu/
mkdir -p target/ci/distribution/x86_64-unknown-linux-gnu/
- Start test environment
cargo theo testenv start
Setup THEO on Windows
This guide will help you set up THEO on Windows.
Requirements
The following instructions use chocolatey to install the required software.
If you don't have chocolatey installed, you can find the installation instructions here.
You may also install the required software manually or e.g. use the Windows Package Manager winget
(Hashicorp.Vagrant, Oracle.VirtualBox, Git.Git).
- Install vagrant and virtualbox
choco install -y vagrant virtualbox
- Install git and configure git to respect line endings
choco install git.install --params "'/GitAndUnixToolsOnPath /WindowsTerminal'"
- Create or check if an SSH key pair is present in ~/.ssh/id_rsa
mkdir -p ~/.ssh
ssh-keygen -t rsa -b 4096 -C "opendut-vm" -f ~/.ssh/id_rsa
Info
Vagrant creates a VM which mounts a Windows file share on /vagrant, where the openDuT repository was cloned. The openDuT project contains bash scripts that would break if the end-of-line conversion to crlf happened on Windows. Therefore, a .gitattributes file containing *.sh text eol=lf was added to the repository to make sure the bash scripts keep eol=lf when cloned on Windows. As an alternative, you may consider using the cloned opendut repo on the Windows host only for the vagrant VM setup. For working with THEO, you can use the cloned opendut repository inside the Linux guest system instead (/home/vagrant/opendut).
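To double-check that the checkout did not pick up CRLF endings after all, you can scan for carriage-return characters. This is a sketch run from the repository root and assumes GNU grep:

```shell
# List any shell scripts that contain a CR character (i.e. CRLF endings);
# no file names in the output means the .gitattributes rule worked.
grep -rlI --include='*.sh' $'\r' . || echo "all *.sh files use LF endings"
```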
Setup virtual machine
- Add the following environment variables to point vagrant to the vagrant file
Git Bash:
export OPENDUT_REPO_ROOT=$(git rev-parse --show-toplevel)
export VAGRANT_DOTFILE_PATH=$OPENDUT_REPO_ROOT/.vagrant
export VAGRANT_VAGRANTFILE=$OPENDUT_REPO_ROOT/.ci/docker/Vagrantfile
PowerShell:
$env:OPENDUT_REPO_ROOT=$(git rev-parse --show-toplevel)
$env:VAGRANT_DOTFILE_PATH="$env:OPENDUT_REPO_ROOT/.vagrant"
$env:VAGRANT_VAGRANTFILE="$env:OPENDUT_REPO_ROOT/.ci/docker/Vagrantfile"
- Set up the vagrant box (following commands were tested in Git Bash and Powershell)
vagrant up
Info
If the virtual machine is not allowed to create or use a private network, you may disable it by setting the environment variable OPENDUT_DISABLE_PRIVATE_NETWORK=true.
- Connect to the virtual machine via ssh (requires the environment variables)
vagrant ssh
Additional notes
You may want to configure a http proxy or a custom certificate authority. Details are in the Advanced Usage section.
THEO Setup in Docker
Requirements
- Install Docker
Ubuntu / Debian
sudo apt install docker.io
On most other Linux distributions, the package is called docker.
- Install Docker Compose v2
Ubuntu / Debian
sudo apt install docker-compose-v2
Alternatively, see https://docs.docker.com/compose/install/linux/.
- Add your user to the docker group, to be allowed to use Docker commands without root permissions. (Mind that this has security implications.)
sudo groupadd docker             # create `docker` group, if it does not exist
sudo gpasswd --add $USER docker  # add your user to the `docker` group
newgrp docker                    # attempt to activate group without re-login
You may need to log out your user account and log back in for this to take effect.
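A quick sanity check for whether the group change has taken effect (a sketch; it only inspects whether the Docker daemon is reachable without sudo):

```shell
# Succeeds only if the current user may talk to the Docker daemon:
if docker info >/dev/null 2>&1; then
  echo "docker works without sudo"
else
  echo "no daemon access yet - re-login or check the docker group"
fi
```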
- Create a distribution of openDuT
cargo ci distribution
- Start containers
cargo theo testenv start
- Start edgar cluster
cargo theo testenv edgar start
Use virtual machine for development
- Start vagrant on host:
cargo theo vagrant up
- Connect to the virtual machine from host:
cargo theo vagrant ssh
- Start developer test mode in opendut-vm:
cargo theo dev start
- Once keycloak and netbird are provisioned, generate a run configuration for CARL in opendut-vm:
cargo theo dev carl-config
- which should give an output similar to the following:
OPENDUT_CARL_NETWORK_REMOTE_HOST=carl
OPENDUT_CARL_NETWORK_REMOTE_PORT=443
OPENDUT_CARL_VPN_ENABLED=true
OPENDUT_CARL_VPN_KIND=netbird
OPENDUT_CARL_VPN_NETBIRD_URL=https://192.168.56.10/api
OPENDUT_CARL_VPN_NETBIRD_CA=<ca_certificate_filepath>
OPENDUT_CARL_VPN_NETBIRD_AUTH_SECRET=<dynamic_api_secret>
OPENDUT_CARL_VPN_NETBIRD_AUTH_TYPE=personal-access-token
OPENDUT_CARL_VPN_NETBIRD_AUTH_HEADER=Authorization
- You may also use the TOML configuration (also printed by the carl-config command) in a configuration file on your host at ~/.config/opendut/carl/config.toml.
- Use the environment variables in the run configuration for CARL
- Run CARL on the host:
cargo ci carl run
- Run LEA on the host:
cargo ci lea run
- Or start CARL in your IDE of choice and add the environment variables to the run configuration.
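One way to wire these variables into a shell session is to store them in an env file and export them before starting CARL. The snippet below is a sketch: the file path and the hard-coded values are illustrative, and in practice you would redirect the output of `cargo theo dev carl-config` into the file instead of using a heredoc:

```shell
# Illustrative values only - normally taken from `cargo theo dev carl-config`:
cat > /tmp/carl.env <<'EOF'
OPENDUT_CARL_NETWORK_REMOTE_HOST=carl
OPENDUT_CARL_NETWORK_REMOTE_PORT=443
EOF

set -a               # export every variable assigned while sourcing
source /tmp/carl.env
set +a

echo "$OPENDUT_CARL_NETWORK_REMOTE_HOST:$OPENDUT_CARL_NETWORK_REMOTE_PORT"
```

After sourcing, `cargo ci carl run` started from the same shell picks up the exported variables.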
Use CLEO
When using CLEO in your IDE or generally on the host, the address for keycloak needs to be overridden, as well as the address for CARL.
# Environment variables to use CARL on host
export OPENDUT_CLEO_NETWORK_CARL_HOST=localhost
export OPENDUT_CLEO_NETWORK_CARL_PORT=8080
# Environment variable to use keycloak in test environment
export OPENDUT_CLEO_NETWORK_OIDC_CLIENT_ISSUER_URL=http://localhost:8081/realms/opendut/
cargo ci cleo run -- list peers
Use virtual machine for testing
This mode is used to test a distribution of openDuT.
- Ensure a distribution of openDuT is present
- By either creating one yourself on your host:
cargo ci distribution
- Or in the opendut-vm.
Within the VM, the rust target directory is overridden to /home/vagrant/rust-target. Therefore, you need to copy the created distribution to the expected location.
cargo ci distribution
mkdir -p /vagrant/target/ci/distribution/x86_64-unknown-linux-gnu/
cp ~/rust-target/ci/distribution/x86_64-unknown-linux-gnu/* /vagrant/target/ci/distribution/x86_64-unknown-linux-gnu/
- Or by copying one to the target directory
target/ci/distribution/x86_64-unknown-linux-gnu/
# ensure directory is present
mkdir -p target/ci/distribution/x86_64-unknown-linux-gnu/
# copy distribution to target directory
- Login to the virtual machine from your host (assumes you have already set up the virtual machine)
cargo theo vagrant ssh
- Start the test environment in opendut-vm:
cargo theo testenv start
- Start a cluster in opendut-vm:
cargo theo testenv edgar start
This will start several EDGAR containers and create an OpenDuT cluster.
Known Issues
Copying data to and from the OpenDuT Browser
The OpenDuT Browser is a web browser running in a docker container. It is based on KasmVNC base image which allows containerized desktop applications from a web browser. When using the OpenDuT Browser, you may want to copy data to and from the OpenDuT browser inside your own browser. On Firefox this is restricted, and you may use the clipboard window on the left side of the OpenDuT Browser to copy data to your clipboard.
Cargo Target Directory
When running cargo tasks within the virtual machine, you may see the following error:
warning: hard linking files in the incremental compilation cache failed. copying files instead. consider moving the cache directory to a file system which supports hard linking in session dir
This is mitigated by setting a different target directory for cargo in /home/vagrant/.bashrc
on the virtual machine:
export CARGO_TARGET_DIR=$HOME/rust-target
Vagrant Permission Denied
Sometimes vagrant fails to insert the private key that was automatically generated. This may cause the following error (seen in git-bash on Windows):
$ vagrant ssh
vagrant@127.0.0.1: Permission denied (publickey).
This can be fixed by overwriting the vagrant-generated key with the one inserted during provisioning:
cp ~/.ssh/id_rsa .vagrant/machines/opendut-vm/virtualbox/private_key
Vagrant Timeout
If the virtual machine is not allowed to create or use a private network it may cause a timeout during booting the virtual machine.
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period.
- You may disable the private network by setting the environment variable
OPENDUT_DISABLE_PRIVATE_NETWORK=true
and explicitly halt and start the virtual machine again.
export OPENDUT_DISABLE_PRIVATE_NETWORK=true
vagrant halt
vagrant up
Vagrant Custom Certificate Authority
When running behind an intercepting http proxy, you may run into issues with SSL certificate verification.
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1007)
This can be mitigated by adding the custom certificate authority to the trust store of the virtual machine.
- Place certificate authority file here:
resources/development/tls/custom-ca.crt
- And re-run the provisioning of the virtual machine.
export CUSTOM_ROOT_CA=resources/development/tls/custom-ca.pem
vagrant provision
Ctrl+C in Vagrant SSH
When using cargo theo vagrant ssh
on Windows and pressing Ctrl+C
to terminate a command, the ssh session may be closed.
Netbird management invalid credentials
If keycloak was re-provisioned after the netbird management server, the management server may not be able to authenticate with keycloak anymore.
# docker logs netbird-management-1
[...]
2024-02-14T09:51:57Z WARN management/server/account.go:1174: user 59896d1b-45e6-48bb-ae79-aa17d5a2af94 not found in IDP
2024-02-14T09:51:57Z ERRO management/server/http/middleware/access_control.go:46: failed to get user from claims: failed to get account with token claims user 59896d1b-45e6-48bb-ae79-aa17d5a2af94 not found in the IdP
# docker logs edgar-leader
[...]
Failed to create new peer.
[...]
Received status code indicating an error: HTTP status client error (403 Forbidden) for url (http://netbird-management/api/groups)
This may be fixed by destroying the netbird service:
cargo theo testenv destroy --service netbird
Afterward you may restart the netbird service:
cargo theo testenv start
# or
cargo theo dev start
No space left on device
Error writing to file - write (28: No space left on device)
You may try to free up space on the virtual machine by (preferred order):
- Cleaning up the cargo target directory:
cargo clean
ls -l $CARGO_TARGET_DIR
- removing old docker images and containers:
docker system prune --all
# and eventually with volumes
docker system prune --all --volumes
Advanced Usage
Use vagrant directly
Run vagrant commands directly instead of through THEO:
- Set the required environment variables and use Vagrant's CLI directly (bash commands run from the root of the repository):
export OPENDUT_REPO_ROOT=$(git rev-parse --show-toplevel)
export VAGRANT_DOTFILE_PATH=$OPENDUT_REPO_ROOT/.vagrant
export VAGRANT_VAGRANTFILE=$OPENDUT_REPO_ROOT/.ci/docker/Vagrantfile
vagrant up
- provision vagrant with desktop environment
ANSIBLE_SKIP_TAGS="" vagrant provision
Re-provision the virtual machine
This is recommended after potentially breaking changes to the virtual machine.
- The following command will re-run the ansible playbook to re-provision the virtual machine. Run from host:
cargo theo vagrant provision
- Destroy test environment and re-create it, run within the virtual machine:
cargo theo vagrant ssh
cargo theo testenv destroy
cargo theo testenv start
Cross compile THEO for Windows on Linux
cross build --release --target x86_64-pc-windows-gnu --bin opendut-theo
# will place binary here
target/x86_64-pc-windows-gnu/release/opendut-theo.exe
Proxy configuration
In case you are working behind a http proxy, you need additional steps to get the test environment up and running.
The following steps pick up just before you start up the virtual machine with vagrant up.
A list of all domains used by the test environment is reflected in the proxy shell script: .ci/docker/vagrant/proxy.sh.
Note that the proxy address used must be accessible both from the host during provisioning and from within the virtual machine.
If you have a proxy server on your localhost, you need to do this in two steps:
- Use proxy on your localhost
- Configure vagrant to use the proxy localhost.
# proxy configuration script, adjust to your needs
source .ci/docker/vagrant/proxy.sh http://localhost:3128
- Install proxy plugin for vagrant
vagrant plugin install vagrant-proxyconf
- Then start the VM without provisioning it.
This should create the vagrant network interface with network range 192.168.56.0/24.
vagrant up --no-provision
- Use proxy on private network address 192.168.56.1
- Make sure this address allows access to the internet:
curl --max-time 2 --connect-timeout 1 --proxy http://192.168.56.1:3128 google.de
- Redo the proxy configuration using the address of the host within the virtual machine's private network:
# proxy configuration script, adjust to your needs
source .ci/docker/vagrant/proxy.sh http://192.168.56.1:3128
- Reapply the configuration to the VM
$ vagrant up --provision
Bringing machine 'opendut-vm' up with 'virtualbox' provider...
==> opendut-vm: Configuring proxy for Apt...
==> opendut-vm: Configuring proxy for Docker...
==> opendut-vm: Configuring proxy environment variables...
==> opendut-vm: Configuring proxy for Git...
==> opendut-vm: Machine not provisioned because `--no-provision` is specified.
- Unset all proxy configuration for testing purposes (non-permanent setting in the shell)
unset http_proxy https_proxy no_proxy HTTP_PROXY HTTPS_PROXY NO_PROXY
- You may also set the docker proxy configuration in your environment manually:
~/.docker/config.json
{
  "proxies": {
    "default": {
      "httpProxy": "http://x.x.x.x:3128",
      "httpsProxy": "http://x.x.x.x:3128",
      "noProxy": "localhost,127.0.0.1,netbird-management,netbird-dashboard,netbird-signal,netbird-coturn,keycloak,edgar-leader,edgar-*,carl,192.168.0.0/16"
    }
  }
}
/etc/docker/daemon.json
{
  "proxies": {
    "http-proxy": "http://x.x.x.x:3128",
    "https-proxy": "http://x.x.x.x:3128",
    "no-proxy": "localhost,127.0.0.1,netbird-management,netbird-dashboard,netbird-signal,netbird-coturn,keycloak,edgar-leader,edgar-*,carl,192.168.0.0/16"
  }
}
Custom root certificate authority
This section describes how to provision the virtual machine when running behind an intercepting http proxy.
This is also used in the docker containers to trust the custom certificate authority.
All certificate authorities matching the following path will be trusted in the docker container: ./resources/development/tls/*-ca.pem.
The following steps need to be done before provisioning the virtual machine.
- Place certificate authority file here:
resources/development/tls/custom-ca.crt
- Optionally, disable private network definition of vagrant, if this causes errors.
export CUSTOM_ROOT_CA=resources/development/tls/custom-ca.pem
export OPENDUT_DISABLE_PRIVATE_NETWORK=true # optional
vagrant provision
Give the virtual machine more CPU cores and more memory
In case you want to build the project you may want to assign more CPU cores, more memory or more disk to your virtual machine.
Just add the following environment variables to the .env
file and reboot the virtual machine.
- Configure more memory and/or CPUs:
OPENDUT_VM_MEMORY=32768
OPENDUT_VM_CPUS=8
cargo theo vagrant halt
cargo theo vagrant up
- Configure more disk space:
- Most of the time you may want to clean up the cargo target directory inside the opendut-vm if you run out of disk space:
cargo clean # should clean out target directory in ~/rust-target
- If this is still not enough you can install the vagrant disk size plugin
vagrant plugin install vagrant-disksize
- add the following environment variable:
OPENDUT_VM_DISK_SIZE=80
- and reboot the virtual machine to have more disk space unlocked.
Secrets for test environment
This repository contains secrets for testing purposes. These secrets are not supposed to be used in a production environment. Two files in the repository document their locations:
- ~/.gitguardian.yml
- .secretscanner-false-positives.json
Alternative strategy to avoid this: auto-generate secrets during test environment setup.
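A minimal sketch of that alternative strategy, assuming `openssl` is available; the variable name is hypothetical, and the generated value would be injected into the service configuration during setup rather than committed to the repository:

```shell
# Generate a random 24-byte secret, base64-encoded (32 characters):
NETBIRD_PASSWORD=$(openssl rand -base64 24)
echo "generated a ${#NETBIRD_PASSWORD}-character secret"
```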
GitGuardian
Getting started with ggshield
- Install ggshield
sudo apt install -y python3-pip
pip install ggshield
export PATH=~/.local/bin/:$PATH
- Login to https://dashboard.gitguardian.com
- Either use PAT or service account (https://docs.gitguardian.com/api-docs/service-accounts)
- Go to API -> Personal access tokens and create a token
- Use API token to login:
ggshield auth login --method token
Scan repository
- See https://docs.gitguardian.com/ggshield-docs/getting-started
- Scan repo
ggshield secret scan repo ./
- Ignore secrets found in the last run and remove them or document them in .gitguardian.yml
ggshield secret ignore --last-found
- Review changes in .gitguardian.yml and commit
Release
Learn how to create releases of openDuT.
Publishing a release
This is a checklist for the steps to take to create a release for public usage.
- Ensure the changelog is up-to-date.
- Change top-most changelog heading from "Unreleased" to the new version number.
- Increment version number in workspace Cargo.toml.
- Run cargo ci check to update all Cargo.lock files.
- Increment the version of the CARL container used in CI/CD deployments (in the .ci/ folder).
- Create commit and push to development.
- Open PR from development to main.
- Merge PR once its checks have succeeded.
- Tag the last commit on main with the respective version number in the format "v1.2.3" and push the tag.
After the release
- Increment version number in workspace Cargo.toml to development version, e.g. "1.2.3-alpha".
- Run cargo ci check to update all Cargo.lock files.
- Add a new heading "Unreleased" to the changelog with contents "tbd.".
- Create commit and push to development.
Manually Building a Release
To build release artifacts for distribution, run:
cargo ci distribution
The artifacts are placed under target/ci/distribution/.
To build a docker container of CARL and push it to the configured docker registry:
cargo ci carl docker --publish
This will publish opendut-carl to ghcr.io/eclipse-opendut/opendut-carl:x.y.z.
The version defined in opendut-carl/Cargo.toml is used as docker tag by default.
Alternative platform
If you want to build artifacts for a different platform, use the following:
cargo ci distribution --target armv7-unknown-linux-gnueabihf
The currently supported target platforms are:
- x86_64-unknown-linux-gnu
- armv7-unknown-linux-gnueabihf
- aarch64-unknown-linux-gnu
Alternative docker registry
To publish the docker container to a container registry other than ghcr.io:
export OPENDUT_DOCKER_IMAGE_HOST=other-registry.example.net
export OPENDUT_DOCKER_IMAGE_NAMESPACE=opendut
cargo ci carl docker --publish --tag 0.1.1
This will publish opendut-carl to 'other-registry.example.net/opendut/opendut-carl:0.1.1'.
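The resulting image reference follows the conventional `host/namespace/name:tag` layout. A small illustration of how the pieces compose (the composition rule is an assumption based on standard Docker image naming):

```shell
# Variable names as in the example above; VERSION stands in for the --tag value.
OPENDUT_DOCKER_IMAGE_HOST=other-registry.example.net
OPENDUT_DOCKER_IMAGE_NAMESPACE=opendut
VERSION=0.1.1
echo "${OPENDUT_DOCKER_IMAGE_HOST}/${OPENDUT_DOCKER_IMAGE_NAMESPACE}/opendut-carl:${VERSION}"
```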
Overview
Components
- CARL (Control And Registration Logic)
- EDGAR (Edge Device Global Access Router)
- LEA (Leasing ECU Access)
- CLEO (Command-Line ECU Orchestrator)
- DUT (Device under test)
Functional description
openDuT provisions an end-to-end encrypted private network between Devices under Test (DuT), Test Execution Engines, RestBus simulations, and other devices. To achieve this, openDuT uses Edge Device Global Access Router (EDGAR), which can tunnel the Ethernet traffic (Layer 2) of the connected devices into the openDuT network using Generic Routing Encapsulation (GRE). CAN traffic is tunnelled between EDGAR instances using cannelloni. EDGAR registers with the Control and Registration Logic (CARL) and reports the type and status of its connected devices. Multiple EDGARs can be linked to clusters via the graphical Leasing ECU Access (LEA) UI or the Command-Line ECU Orchestrator (CLEO) of CARL, and the openDuT cluster can be provisioned for the user.
openDuT uses NetBird technology and provides its own NetBird server, including a TURN server in CARL and NetBird clients in the EDGARs. The NetBird clients of the clustered EDGARs automatically build a WireGuard network in star topology. If a direct connection between two EDGARs is not possible, the tunnel is routed through the TURN server in CARL.
Within EDGAR, the openDuT ETH Bridge manages Ethernet communication and routes outgoing packets to the GRE-Bridge(s). The GRE-Bridges encapsulate the packets and send them from fixed-assigned sources to fixed-assigned targets. When encapsulating, GRE writes the source and header information and the protocol type of the data packet into the GRE header of the packet. This offers the following advantages: different protocol types can be sent, network participants can be in the same subnet, and multiple VLANs can be transmitted through a single WireGuard tunnel.
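The Layer-2 tunneling described above can be sketched with plain iproute2 commands. This is an illustration of the technique, not EDGAR's actual implementation; the interface names and VPN addresses are made up, and the commands require root privileges:

```shell
# Create the Ethernet bridge and a Layer-2 GRE (gretap) tunnel towards a peer
# reachable over the WireGuard network:
ip link add br-opendut type bridge
ip link add gre-peer1 type gretap local 100.64.0.1 remote 100.64.0.2

# Attach both the DuT-facing interface and the tunnel to the bridge, so
# Ethernet frames are forwarded transparently between them:
ip link set eth1 master br-opendut
ip link set gre-peer1 master br-opendut
ip link set br-opendut up
ip link set gre-peer1 up
```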
CAN interfaces on EDGAR are connected via the openDuT CAN Bridge, which is effectively a virtual CAN interface linked to the individual interfaces through can-gw rules. Between the leading EDGAR and each other EDGAR, a cannelloni tunnel is established, linking the CAN bridges of the different EDGAR instances together.
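The CAN side can be sketched similarly with can-utils and cannelloni. Again an illustration only, with made-up interface names, addresses, and ports; EDGAR sets this up automatically:

```shell
# Create the virtual CAN interface acting as the bridge:
ip link add name br-can type vcan
ip link set br-can up

# Mirror frames between the physical interface and the bridge in both
# directions using can-gw rules:
cangw -A -s can0 -d br-can -e
cangw -A -s br-can -d can0 -e

# A cannelloni tunnel then carries br-can traffic to the leading EDGAR:
cannelloni -I br-can -R 100.64.0.1 -r 20000 -l 20000
```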
CARL
Overview
ResourcesManager
Database Schema
Peer
PeerState
When an EDGAR manages multiple devices, these can only be configured into one deployed Cluster.
It is not possible to use some of these devices in one deployed Cluster and some in another.
This is due to the current architecture only supporting one EDGAR per peer computer (the machine placed next to your ECU),
and only one Cluster being allowed to be deployed per EDGAR, as this simplifies management of the VPN client and network interfaces considerably.
Theoretically, it is possible to deploy two EDGARs onto a peer computer by isolating them via containers, or to use two peer computers for one ECU.
Cluster
ClusterState
Cluster
Cluster Creation
message ClusterConfiguration {
ClusterId id = 1;
ClusterName name = 2;
opendut.types.peer.PeerId leader = 3;
repeated opendut.types.topology.DeviceId devices = 4;
}
Cluster Deployment
message ClusterAssignment {
ClusterId id = 1;
opendut.types.peer.PeerId leader = 3;
repeated PeerClusterAssignment assignments = 4;
}
message PeerClusterAssignment {
opendut.types.peer.PeerId peer_id = 1;
opendut.types.util.IpAddress vpn_address = 2;
opendut.types.util.Port can_server_port = 3;
repeated opendut.types.util.NetworkInterfaceDescriptor device_interfaces = 4;
}
EDGAR
Setup
Service
Deployment
Telemetry
Changelog
Notable changes to this project are documented in this file.
The format is based on Keep a Changelog,
and this project adheres to Semantic Versioning.
0.4.0
Added
- CLEO now comes with a new subcommand opendut-cleo apply. You may load cluster and peer configurations from a YAML file, similar to how resources are loaded with kubectl apply in Kubernetes. For more information, see Applying Configuration Files.
- A monitoring dashboard is now available in the deployment environment at https://monitoring.opendut.local.
Fixed
- A major upgrade of the networking libraries has been completed. This affects HTTP and gRPC, server- and client-side usage, as well as the OpenTelemetry monitoring.
- CARL no longer sends duplicate Executor and Ethernet bridge name configurations to EDGAR when re-deploying a cluster. This may have caused EDGAR to repeatedly delete and recreate these.
0.3.1
Fixed
- Restarting EDGAR while a cluster is deployed doesn't lead to an invalid state anymore.
- CARL doesn't forget about Ethernet bridges and executors anymore, when sending the configuration to a reconnecting EDGAR.
- EDGAR Setup now loads plugins correctly.
0.3.0
Breaking Changes
- The API for listing peers on the PeerMessagingBroker has been removed.
Added
- CARL can now persist its state into a database.
- EDGAR Setup now has support for plugins, which can perform hardware- or use-case specific setup tasks.
Changed
- EDGAR Setup now prompts whether to overwrite a mismatched configuration, when used interactively.
- The NetBird server and client was updated to 0.28.9.
Fixed
- EDGAR Service does not require root permissions anymore, if CAN is not used.
Known Issues
- Stopping an EDGAR that has a cluster deployed does not undeploy the cluster, therefore blocking other EDGARs in the cluster.
0.2.0
Breaking Changes
CARL API
- The API for listing peers on the PeerMessagingBroker is now marked as deprecated.
Operations
- An additional configuration value needs to be passed to CARL. You can do so, for example, via environment variable:
OPENDUT_CARL_NETWORK_OIDC_CLIENT_ISSUER_ADMIN_URL=https://keycloak/admin/realms/opendut/
The value has to be your Keycloak's Admin URL.
- The environment variable for the Keycloak database's password was renamed from POSTGRES_PASSWORD to KEYCLOAK_POSTGRES_PASSWORD.
- An additional password environment variable needs to be provided, called CARL_POSTGRES_PASSWORD.
Added
- CARL can now require clients to be authenticated.
- A download button for CLEO and EDGAR has been added in the LEA web-UI.
- LEA and CLEO show when a peer or device is already used in a cluster.
- You can now configure frequently used CAN parameters in LEA and CLEO.
- Setup-Strings can now be copied to the clipboard in LEA.
Changed
- The health of Clusters marked as deployed is now displayed as yellow in LEA. This is to reflect that determining the actual cluster state is not yet implemented.
- It's no longer possible to configure deployment of a peer into two clusters. This was never supported to begin with, but the UIs didn't prevent it.
- Various quality-of-life improvements.
Fixed
- Generating a Setup-String now works for peers which had previously been set up.
0.1.0
Added
- Client credentials added to peer setup
Development
Test environment
Notable changes to the test environment are documented in this section. Changes to the test environment may require re-provisioning the virtual machine.
Added
- New administrative privileges for keycloak client opendut-carl-client
- Added linux-generic package to opendut-vm (keeps vcan module up-to-date when kernel is updated)