Traditionally, the automotive industry was and still is centered around vehicle hardware and the corresponding hardware development and life-cycle management. Software, however, is gaining more and more importance in vehicle development and over the entire vehicle lifetime. Thus, the vehicle and its value to the customer are increasingly defined by software. This transition towards what are termed software-defined vehicles changes the way in which we innovate, code, deliver and work together. It is a change across the whole mobility value chain and life-cycle: from development and production to delivery and operation of the vehicle.
Velocitas
- 1: About Velocitas
- 1.1: Use Cases
- 1.1.1: Seat Adjuster
- 1.1.2: Dog Mode
- 1.2: Development Model
- 1.2.1: Vehicle App SDK
- 1.2.2: Vehicle Abstraction Layer (VAL)
- 1.3: Deployment Model
- 1.3.1: Build and Release Process
- 1.4: Repository Overview
- 2: Tutorials
- 2.1: Quickstart
- 2.1.1: Import examples
- 2.1.2: Working behind proxy
- 2.2: Prototyping Integration
- 2.2.1: Service Integration
- 2.3: Vehicle App Development
- 2.3.1: Python Vehicle App Development
- 2.3.2: C++ Vehicle App Development
- 2.4: Vehicle Model Creation
- 2.4.1: C++ Manual Vehicle Model Creation
- 2.4.2: Python Manual Vehicle Model Creation
- 2.4.3: Vehicle Model Distribution
- 2.4.3.1: C++ Vehicle Model Distribution
- 2.4.3.2: Python Vehicle Model Distribution
- 2.5: Run Vehicle App Runtime Services
- 2.6: Vehicle App Integration Testing
- 2.7: Vehicle App Deployment via PodSpecs
- 2.8: Vehicle App Deployment with Helm
- 3: Contribution Guidelines
1 - About Velocitas
Eclipse Velocitas™ provides an end-to-end, scalable and modular development tool chain to create containerized in-vehicle applications (Vehicle Apps), offering a comfortable, fast and efficient development experience to increase the speed of a development team.

What does Velocitas offer?
- Predefined CI/CD workflows that build (for multiple architectures), test, document and deploy a Vehicle App help save setup time
- A DevContainer installs everything needed to start local development immediately in Microsoft Visual Studio Code
- A Vehicle API that abstracts the vehicle's signals and E/E architecture helps you focus on business logic and enables Vehicle Apps to be portable across different electric and electronic vehicle architectures
- A Vehicle App skeleton and example Vehicle Apps help you understand how to write a Vehicle App using the KUKSA VAL runtime services
- Higher development velocity through self-contained, containerized apps with no dependencies on the E/E architecture
Concepts
1.1 - Use Cases
Velocitas offers a scalable and modular development toolchain for creating containerized Vehicle Applications that provides an easy-to-use, fast and efficient development experience to increase the velocity of your development team.
These Vehicle Apps are implemented on top of a Vehicle Model (which is generated from the underlying semantic models like VSS for a concrete programming language) and its underlying language-specific SDK to provide headless comfort functions or connected application functions like Seat Adjuster, Dog Mode, Trunk Delivery or Data Logging & Triggering.
Examples
1.1.1 - Seat Adjuster
Imagine a car sharing company that wants to offer its customers the functionality that the driver seat automatically moves to the right position, when the driver enters the rented car. The car sharing company knows the driver and has stored the preferred seat position of the driver in its driver profile. The car gets unlocked by the driver and a request for the preferred seat position of the driver will be sent to the vehicle. That’s where your implementation starts.
The Seat Adjuster Vehicle App receives the seat position as an MQTT message and triggers a seat adjustment command of the Seat Service that changes the seat position. Of course, the driver of a rented car would like any position they have set themselves to be saved by the car sharing company and used for the next trip. As a result, the Seat Adjuster Vehicle App subscribes to the seat position and receives the new seat position from the Data Broker, which streams the data from the Seat Service.
Requesting new seat position
- The Customer requests the change of the seat position as an MQTT message on the topic seatadjuster/setPosition/request with the payload: {"requestId": "xyz", "position": 300}
- The Seat Adjuster Vehicle App, which has subscribed to this topic, receives the request to change the seat position as an MQTT message.
- The Seat Adjuster Vehicle App gets the current vehicle speed from the data broker, which is fed by the CAN Feeder (KUKSA DBC Feeder).
- With the support of the Vehicle App SDK, the Seat Adjuster Vehicle App triggers a seat adjustment command of the Seat Service via gRPC in the event that the speed is equal to zero. Hint: This is a helpful convenience check but not a safety check.
- The Seat Service moves the seat to the new position via CAN messages.
- The Seat Service returns OK or an error code as gRPC status to the Seat Adjuster Vehicle App.
- If everything went well, the Seat Adjuster Vehicle App returns a success message on the topic seatadjuster/setPosition/response with the payload: {"requestId": "xyz", "status": 0 }. Otherwise, an error message will be returned: {"requestId": "xyz", "status": 1, "message": "<error message>" }
- This success or error message will be returned to the Customer.
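The following condensed Python sketch illustrates this request flow, assembled only from SDK constructs that appear later in this documentation (the subscribe_topic annotation, publish_mqtt_event(), data point access via the Vehicle Model and a SeatService with a Move method). It is an illustrative sketch, not the reference implementation; the module path, the Move call and its argument are simplified assumptions, so please refer to the linked example code below for the real app.

import json

from sdv.vehicle_app import VehicleApp, subscribe_topic  # module path as used by the Python template (assumption)


class SeatAdjusterApp(VehicleApp):
    def __init__(self, vehicle):
        super().__init__()
        self.vehicle = vehicle

    @subscribe_topic("seatadjuster/setPosition/request")
    async def on_set_position_request_received(self, data: str) -> None:
        request = json.loads(data)

        # Convenience check only, not a safety check: adjust the seat while standing still.
        speed = self.vehicle.Speed.get()
        if speed == 0:
            # Trigger the Seat Service via gRPC through the Vehicle Model
            # (call and argument are simplified for illustration).
            await self.vehicle.Cabin.SeatService.Move(request["position"])
            response = {"requestId": request["requestId"], "status": 0}
        else:
            response = {
                "requestId": request["requestId"],
                "status": 1,
                "message": "Seat can only be adjusted while the vehicle is standing still",
            }
        await self.publish_mqtt_event("seatadjuster/setPosition/response", json.dumps(response))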
Publishing current seat position
- If the seat position is changed by the driver, the new seat position is sent to the Seat Service via CAN.
- The Seat Service streams the seat position via gRPC to the KUKSA Data Broker since it was registered beforehand.
- The Seat Adjuster Vehicle App that subscribed to the seat position receives the new seat position from the KUKSA Data Broker as a result.
- The Seat Adjuster Vehicle App publishes this on the topic seatadjuster/currentPosition with the payload: {"position": 350}
- The Customer who has subscribed to this topic retrieves the new seat position and can store this position to use it for the next trip.
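The publishing direction can be sketched the same way, continuing the SeatAdjusterApp sketch above and again reusing only constructs introduced later in this documentation (data point subscriptions and publish_mqtt_event()); the on_start lifecycle hook, the callback name and the exact seat position path are assumptions for illustration only.

    async def on_start(self):
        # Subscribe to the seat position data point provided by the Data Broker.
        await self.vehicle.Cabin.Seat.Row1.Pos1.Position.subscribe(self.on_seat_position_changed)

    async def on_seat_position_changed(self, data):
        position = data.get(self.vehicle.Cabin.Seat.Row1.Pos1.Position).value
        # Forward the new position to the Customer via MQTT.
        await self.publish_mqtt_event("seatadjuster/currentPosition", json.dumps({"position": position}))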
Example Code
You can find an example implementation of a Seat Adjuster vehicle application here: Seat Adjuster
1.1.2 - Dog Mode
The Dog Mode Vehicle App consists of the following key features:
- Request the vehicle’s Heating, Ventilation, and Air Conditioning (HVAC) service to turn the Air Conditioning (AC) ON/OFF
- The driver can adjust the temperature to a specific degree
- The Vehicle App observes the current temperature and the battery's state of charge and reacts accordingly
- The driver/owner will be notified whenever the state of charge drops below a certain value
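A minimal Python sketch of these features, written in the style of the SDK examples shown later in this documentation; the VSS-style signal paths, the actuator set() call, the threshold and the notification topic are assumptions for illustration only.

import json

from sdv.vehicle_app import VehicleApp  # module path as used by the Python template (assumption)


class DogModeApp(VehicleApp):
    def __init__(self, vehicle):
        super().__init__()
        self.vehicle = vehicle
        self.target_temperature = 22  # adjustable by the driver, e.g. via a dedicated MQTT topic

    async def on_start(self):
        # Observe cabin temperature and battery state of charge (illustrative VSS-style paths).
        await self.vehicle.Cabin.HVAC.AmbientAirTemperature.subscribe(self.on_temperature_changed)
        await self.vehicle.Powertrain.TractionBattery.StateOfCharge.Current.subscribe(self.on_soc_changed)

    async def on_temperature_changed(self, data):
        temperature = data.get(self.vehicle.Cabin.HVAC.AmbientAirTemperature).value
        # Ask the HVAC service to switch the AC on or off to keep the target temperature.
        await self.vehicle.Cabin.HVAC.IsAirConditioningActive.set(temperature > self.target_temperature)

    async def on_soc_changed(self, data):
        soc = data.get(self.vehicle.Powertrain.TractionBattery.StateOfCharge.Current).value
        if soc < 20:
            # Notify the driver/owner, e.g. via a topic consumed by a cloud backend.
            await self.publish_mqtt_event("dogmode/notification", json.dumps({"stateOfCharge": soc}))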
Example Code
You can find an example implementation of a dog mode vehicle application here: Dog Mode
1.2 - Development Model
The Velocitas development model is centered around what are known as Vehicle Apps. Automation allows engineers to make high-impact changes frequently and deploy Vehicle Apps through cloud backends as over-the-air updates. The Vehicle App development model is about speed and agility paired with state-of-the-art software quality.
Development Architecture
Velocitas provides a flexible development architecture for Vehicle Apps. The following diagram shows the major components of the Velocitas stack.
Vehicle Apps
The Vehicle Applications (aka. Vehicle Apps) contain the business logic that needs to be executed on a vehicle. A Vehicle App is implemented on top of a Vehicle Model and its underlying language-specific SDK. Many concepts of cloud-native and twelve-factor applications apply to Vehicle Apps as well and are summarized in the next chapter.
Vehicle Models
A Vehicle Model makes it possible to easily get vehicle data from the Data Broker and to execute remote procedure calls over gRPC against Vehicle Services and other Vehicle Apps. It is generated from the underlying semantic models for a concrete programming language as a graph-based, strongly-typed, intellisense-enabled library. The elements of the vehicle models are defined by the SDKs.
SDKs
To reduce the effort required to implement Vehicle Apps, Velocitas provides a set of SDKs for different programming languages. SDKs are available for Python and C++, further SDKs for Rust and C are planned. Next to a Vehicle Apps abstraction, the SDKs are Middleware-enabled, provide connectivity to the Data Broker and contain the ontology in the form of base classes to create Vehicle Models.
Vehicle Services
Vehicle Services provide service interfaces to control actuators or to trigger (complex) actions. E.g. they communicate with the vehicle-internal networks like CAN or Ethernet, which are connected to actuators, electronic control units (ECUs) and other vehicle computers (VCs). They may provide a simulation mode to run without a network interface. Vehicle services may feed data to the Data Broker and may expose gRPC endpoints, which can be invoked by Vehicle Apps over a Vehicle Model.
Data Broker
Vehicle data is stored in the KUKSA Data Broker conforming to an underlying Semantic Model like VSS. Vehicle Apps can either pull this data or subscribe for updates. In addition, it supports rule-based access to reduce the number of updates sent to the Vehicle App.
Semantic models
The Vehicle Signal Specification (VSS) provides a domain taxonomy for vehicle signals and defines the vehicle data semantically, which is exchanged between Vehicle Apps and the Data Broker.
The Vehicle Service Catalog (VSC) extends VSS with functional remote procedure call definitions and semantically defines the gRPC interfaces of Vehicle Services and Vehicle Apps.
As an alternative to VSS and VSC, vehicle data and services can also be defined semantically in a general IoT modelling language like the Digital Twin Definition Language (DTDL) or the BAMM Aspect Meta Model (BAMM).
The Velocitas SDK uses VSS as the semantic model for the Vehicle Model.
Communication Protocols
Asynchronous communication between Vehicle Apps and other vehicle components, as well as cloud connectivity, is facilitated through MQTT messaging. Direct, synchronous communication between Vehicle Apps, Vehicle Services and the Data Broker is based on the gRPC protocol.
Middleware
Velocitas leverages dapr for gRPC service discovery, Open Telemetry tracing and the publish/subscribe building block which provides an abstraction of the MQTT messaging protocol.
Vehicle Edge Operating System
Vehicle Apps are expected to run on a Linux-based operating system. An OCI-compliant container runtime is required to host the Vehicle App containers, and the dapr middleware mandates a Kubernetes control plane. For publish/subscribe messaging an MQTT broker must be available (e.g., Eclipse Mosquitto).
Vehicle App Characteristics
The following aspects are important characteristics for Vehicle Apps:
- Code base: Every Vehicle App is stored in its own repository. Tracked by version control, it can be deployed to multiple environments.
- Polyglot: Vehicle Apps can be written in any programming language. System-level programming languages like Rust and C/C++ are particularly relevant for limited hardware resources found in vehicles, but higher-level languages like Python and JavaScript are also considered for special use cases.
- OCI-compliant containers: Vehicle Apps are deployed as OCI-compliant containers. The size of these containers should be minimal to fit on constrained devices.
- Isolation: Each Vehicle App should execute in its own process and should be self-contained with its interfaces and functionality exposed on its own port.
- Configurations: Configuration information is separated from the code base of the Vehicle App, so that the same deployment can propagate across environments with their respective configuration applied.
- Disposability: Favor fast startup and support graceful shutdowns to leave the system in a correct state.
- Observability: Vehicle Apps provide traces, metrics and logs of every part of the application using Open Telemetry.
- Over-the-air updatability: Vehicle Apps are released to cloud backends like the Bosch Mobility Cloud and can be updated in vehicles frequently over the air.
Development Process
The starting point for developing Vehicle Apps is a Semantic Model of the vehicle data and vehicle services. Based on the Semantic Model, language-specific Vehicle Models are generated. Vehicle Models are then distributed as packages to the respective package manager of the chosen programming language (e.g. pip, cargo, npm, …).
After a Vehicle Model is available for the chosen programming language, the Vehicle App can be developed using the generated Vehicle Model and its SDK.
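To make the last step concrete: once a generated Vehicle Model package has been published, a Vehicle App simply installs it with the chosen package manager and imports it. The package and module names below are placeholders, not the actual distribution names.

# e.g. installed beforehand via: pip install my-vehicle-model==0.1.0  (hypothetical package name)
from my_vehicle_model.Vehicle import Vehicle  # hypothetical module layout of a generated model

vehicle = Vehicle("Vehicle")
# Strongly-typed, intellisense-enabled access generated from the semantic model:
speed_data_point = vehicle.Speed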
Further information
1.2.1 - Vehicle App SDK
Introduction
The Vehicle App SDK consists of the following building blocks:
- Vehicle Model Ontology: The SDK provides a set of model base classes for the creation of vehicle models.
- Middleware integration: Vehicle Models can contain gRPC stubs to communicate with Vehicle Services. gRPC communication is integrated with the Dapr middleware for service discovery and OpenTelemetry tracing.
- Fluent query & rule construction: Based on a concrete Vehicle Model, the SDK is able to generate queries and rules against the KUKSA Data Broker to access the real values of the data points that are defined in the vehicle model.
- Publish & subscribe messaging: The SDK supports publishing messages to an MQTT broker and subscribing to topics of an MQTT broker.
- Vehicle App abstraction: Last but not least, the SDK provides a VehicleApp base class, which every Vehicle App derives from.
An overview of the Vehicle App SDK and its dependencies is depicted in the following diagram:
Vehicle Model Ontology
The Vehicle Model is a tree-based model where every branch in the tree, including the root, is derived from the Model base class.
The Vehicle Model Ontology consists of the following classes:
Model
A model contains services, data points and other models. It corresponds to branch entries in VSS or interfaces in DTDL or namespaces in VSC.
ModelCollection
Info
The ModelCollection is deprecated since SDK v0.4.0. The generated vehicle model must reflect the actual representation of the data points. Please use the Model base class instead.
Specifications like VSS support a concept called Instances, which makes it possible to describe repeating definitions. In DTDL, such structures may be modeled with Relationships. In the SDK, these structures are mapped with the ModelCollection class. A ModelCollection is a collection of models, which makes it possible to reference an individual model either by a NamedRange (e.g., Row [1-3]), a Dictionary (e.g., “Left”, “Right”) or a combination of both.
Service
Direct asynchronous communication between Vehicle Apps and Vehicle Services is facilitated via the gRPC protocol.
The SDK has its own Service base class, which provides a convenience API layer to access the exposed methods of exactly one gRPC endpoint of a Vehicle Service or another Vehicle App. Please see the Middleware Integration section for more details.
DataPoint
DataPoint is the base class for all data points. It corresponds to sensors/actuators in VSS or telemetry/properties in DTDL.
Data points are the signals that are typically emitted by Vehicle Services.
The representation of a data point is a path starting with the root model, e.g.:
Vehicle.Speed
Vehicle.FuelLevel
Vehicle.Cabin.Seat.Row1.Pos1.Position
Data points are defined as attributes of the model classes. The attribute name is the name of the data point without its path.
Typed DataPoint classes
Every primitive datatype has a corresponding typed data point class, which is derived from DataPoint (e.g., DataPointInt32, DataPointFloat, DataPointBool, DataPointString, etc.).
Example
An example of a Vehicle Model created with the described ontology is shown below:
# import ontology classes
from sdv import (
    DataPointArray,
    DataPointBool,
    DataPointDouble,
    DataPointFloat,
    DataPointInt32,
    DataPointString,
    Model,
    Service,
)


class Seat(Model):
    def __init__(self, name, parent):
        super().__init__(parent)
        self.name = name
        self.Position = DataPointBool("Position", self)
        self.IsOccupied = DataPointBool("IsOccupied", self)
        self.IsBelted = DataPointBool("IsBelted", self)
        self.Height = DataPointInt32("Height", self)
        self.Recline = DataPointInt32("Recline", self)


class Cabin(Model):
    def __init__(self, name, parent):
        super().__init__(parent)
        self.name = name
        self.DriverPosition = DataPointInt32("DriverPosition", self)
        self.Seat = SeatCollection("Seat", self)


class SeatCollection(Model):
    def __init__(self, name, parent):
        super().__init__(parent)
        self.name = name
        self.Row1 = self.RowType("Row1", self)
        self.Row2 = self.RowType("Row2", self)

    def Row(self, index: int):
        if index < 1 or index > 2:
            raise IndexError(f"Index {index} is out of range")
        _options = {
            1: self.Row1,
            2: self.Row2,
        }
        return _options.get(index)

    class RowType(Model):
        def __init__(self, name, parent):
            super().__init__(parent)
            self.name = name
            self.Pos1 = Seat("Pos1", self)
            self.Pos2 = Seat("Pos2", self)
            self.Pos3 = Seat("Pos3", self)

        def Pos(self, index: int):
            if index < 1 or index > 3:
                raise IndexError(f"Index {index} is out of range")
            _options = {
                1: self.Pos1,
                2: self.Pos2,
                3: self.Pos3,
            }
            return _options.get(index)


class VehicleIdentification(Model):
    def __init__(self, name, parent):
        super().__init__(parent)
        self.name = name
        self.VIN = DataPointString("VIN", self)
        self.Model = DataPointString("Model", self)


class CurrentLocation(Model):
    def __init__(self, name, parent):
        super().__init__(parent)
        self.name = name
        self.Latitude = DataPointDouble("Latitude", self)
        self.Longitude = DataPointDouble("Longitude", self)
        self.Timestamp = DataPointString("Timestamp", self)
        self.Altitude = DataPointDouble("Altitude", self)


class Vehicle(Model):
    def __init__(self, name, parent=None):
        super().__init__(parent)
        self.name = name
        self.Speed = DataPointFloat("Speed", self)
        self.CurrentLocation = CurrentLocation("CurrentLocation", self)
        self.Cabin = Cabin("Cabin", self)


vehicle = Vehicle("Vehicle")
# include "sdk/DataPoint.h"
# include "sdk/Model.h"
using namespace velocitas;
class Seat : public Model {
public:
Seat(std::string name, Model* parent)
: Model(name, parent) {}
DataPointBoolean Position{"Position", this};
DataPointBoolean IsOccupied{"IsOccupied", this};
DataPointBoolean IsBelted{"IsBelted", this};
DataPointInt32 Height{"Height", this};
DataPointInt32 Recline{"Recline", this};
};
class CurrentLocation : public Model {
public:
CurrentLocation(Model* parent)
: Model("CurrentLocation", parent) {}
DataPointDouble Latitude{"Latitude", this};
DataPointDouble Longitude{"Longitude", this};
DataPointString Timestamp{"Timestamp", this};
DataPointDouble Altitude{"Altitude", this};
};
class Cabin : public Model {
public:
class SeatCollection : public Model {
public:
class RowType : public Model {
public:
using Model::Model;
Seat Pos1{"Pos1", this};
Seat Pos2{"Pos2", this};
};
SeatCollection(Model* parent)
: Model("Seat", parent) {}
RowType Row1{"Row1", this};
RowType Row2{"Row2", this};
};
Cabin(Model* parent)
: Model("Cabin", parent) {}
DataPointInt32 DriverPosition{"DriverPosition", this};
SeatCollection Seat{this};
};
class Vehicle : public Model {
public:
Vehicle()
: Model("Vehicle") {}
DataPointFloat Speed{"Speed", this};
::CurrentLocation CurrentLocation{this};
::Cabin Cabin{this};
};
Middleware integration
gRPC Services
Vehicle Services are expected to expose their public endpoints over the gRPC protocol. The related protobuf definitions are used to generate method stubs for the Vehicle Model to make it possible to call the methods of the Vehicle Services.
Model integration
Based on the .proto files of the Vehicle Services, the protocol buffers compiler generates descriptors for all RPCs, messages, fields, etc. for the target language.
The gRPC stubs are wrapped by a convenience layer class derived from Service that contains all the methods of the underlying protocol buffer specification.
Info
The convenience layer of C++ is a bit more extensive than in Python. The complexity of gRPC's async API is hidden behind individual AsyncGrpcFacade implementations, which need to be implemented manually. Have a look at the SeatAdjusterApp example's SeatService and its SeatServiceAsyncGrpcFacade.
class SeatService(Service):
    def __init__(self):
        super().__init__()
        self._stub = SeatsStub(self.channel)

    async def Move(self, seat: Seat):
        response = await self._stub.Move(
            MoveRequest(seat=seat), metadata=self.metadata
        )
        return response
class SeatService : public Service {
public:
    // nested classes/structs omitted

    SeatService(Model* parent)
        : Service("SeatService", parent)
        , m_asyncGrpcFacade(std::make_shared<SeatServiceAsyncGrpcFacade>(
              grpc::CreateChannel("localhost:50051", grpc::InsecureChannelCredentials())))
    {
    }

    AsyncResultPtr_t<VoidResult> move(Seat seat)
    {
        auto asyncResult = std::make_shared<AsyncResult<VoidResult>>();
        m_asyncGrpcFacade->Move(
            toGrpcSeat(seat),
            [asyncResult](const auto& reply) { asyncResult->insertResult(VoidResult{}); },
            [asyncResult](const auto& status) { asyncResult->insertError(toInternalStatus(status)); });
        return asyncResult;
    }

private:
    std::shared_ptr<SeatServiceAsyncGrpcFacade> m_asyncGrpcFacade;
};
Service discovery
The underlying gRPC channel is provided and managed by the Service base class of the SDK. It is also responsible for routing the method invocation to the service through the dapr middleware. As a result, a dapr-app-id has to be assigned to every Service, so that dapr can discover the corresponding vehicle services. This dapr-app-id has to be specified as an environment variable named <service_name>_DAPR_APP_ID.
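For illustration, a hypothetical seat service registered under the dapr-app-id "seatservice" could be made discoverable like this; the service name and id are placeholders, and in practice this variable would normally be set in the deployment configuration rather than in application code.

import os

# The SDK derives the variable name from the service name: <service_name>_DAPR_APP_ID
os.environ["SEATSERVICE_DAPR_APP_ID"] = "seatservice"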
Fluent query & rule construction
A set of query methods like get(), where(), join() etc. are provided through the Model and DataPoint base classes. These functions make it possible to construct SQL-like queries and subscriptions in a fluent language, which are then transmitted through the gRPC interface to the KUKSA Data Broker.
Query examples
The following examples show you how to query data points.
Get single datapoint
driver_pos: int = vehicle.Cabin.DriverPosition.get()
# Call to broker
# GetDataPoint(rule="SELECT Vehicle.Cabin.DriverPosition")
auto driverPos = getDataPoints({Vehicle.Cabin.DriverPosition})->await();
// Call to broker:
// GetDataPoint(rule="SELECT Vehicle.Cabin.DriverPosition")
Get datapoints from multiple branches
vehicle_data = vehicle.CurrentLocation.Latitude.join(
    vehicle.CurrentLocation.Longitude).get()
print(
    f"Latitude: {vehicle_data.CurrentLocation.Latitude}\n"
    f"Longitude: {vehicle_data.CurrentLocation.Longitude}"
)
# Call to broker
# GetDataPoint(rule="SELECT Vehicle.CurrentLocation.Latitude, CurrentLocation.Longitude")
auto datapoints =
getDataPoints({Vehicle.CurrentLocation.Latitude, Vehicle.CurrentLocation.Longitude})->await();
// Call to broker:
// GetDataPoint(rule="SELECT Vehicle.CurrentLocation.Latitude, CurrentLocation.Longitude")
Subscription examples
Subscribe and Unsubscribe to a single datapoint
self.rule = (
    await self.vehicle.Cabin.Seat.Row(2).Pos(1).Position
    .subscribe(self.on_seat_position_change)
)

def on_seat_position_change(self, data: DataPointReply):
    position = data.get(self.vehicle.Cabin.Seat.Row2.Pos1.Position).value
    print(f'Seat position changed to {position}')

# Call to broker
# Subscribe(rule="SELECT Vehicle.Cabin.Seat.Row2.Pos1.Position")

# If needed, the subscription can be stopped like this
await self.rule.subscription.unsubscribe()
auto subscription =
    subscribeDataPoints(
        velocitas::QueryBuilder::select(Vehicle.Cabin.Seat.Row(2).Pos(1).Position).build())
        ->onItem(
            [this](auto&& item) { onSeatPositionChanged(std::forward<decltype(item)>(item)); });

// If needed, the subscription can be stopped like this:
subscription->cancel();

void onSeatPositionChanged(const DataPointMap_t datapoints) {
    logger().info("SeatPosition has changed to: " +
                  std::to_string(datapoints.at(Vehicle.Cabin.Seat.Row(2).Pos(1).Position)->asFloat().get()));
}
Subscribe to a single datapoint with a filter
Vehicle.Cabin.Seat.Row(2).Pos(1).Position.where(
    "Cabin.Seat.Row2.Pos1.Position > 50"
).subscribe(on_seat_position_change)

def on_seat_position_change(data: DataPointReply):
    position = data.get(Vehicle.Cabin.Seat.Row2.Pos1.Position).value
    print(f'Seat position changed to {position}')

# Call to broker
# Subscribe(rule="SELECT Vehicle.Cabin.Seat.Row2.Pos1.Position WHERE Vehicle.Cabin.Seat.Row2.Pos1.Position > 50")
auto query = QueryBuilder::select(Vehicle.Cabin.Seat.Row(2).Pos(1).Position)
                 .where(Vehicle.Cabin.Seat.Row(2).Pos(1).Position)
                 .gt(50)
                 .build();
subscribeDataPoints(query)->onItem(
    [this](auto&& item) { onSeatPositionChanged(std::forward<decltype(item)>(item)); });

void onSeatPositionChanged(const DataPointMap_t datapoints) {
    logger().info("SeatPosition has changed to: " +
                  std::to_string(datapoints.at(Vehicle.Cabin.Seat.Row(2).Pos(1).Position)->asFloat().get()));
}
// Call to broker:
// Subscribe(rule="SELECT Vehicle.Cabin.Seat.Row2.Pos1.Position WHERE Vehicle.Cabin.Seat.Row2.Pos1.Position > 50")
Publish & subscribe messaging
The SDK supports publishing messages to an MQTT broker and subscribing to topics of an MQTT broker. By leveraging the dapr pub/sub building block for this purpose, the low-level MQTT communication is abstracted away from the Vehicle App developer. In particular, the physical address and port of the MQTT broker are no longer configured in the Vehicle App itself, but rather are part of the dapr configuration, which is outside of the Vehicle App.
Publish MQTT Messages
MQTT messages can be published easily with the publish_mqtt_event() method, inherited from the VehicleApp base class:
await self.publish_mqtt_event(
"seatadjuster/currentPosition", json.dumps(req_data))
publishToTopic("seatadjuster/currentPosition", "{ \"position\": 40 }");
Subscribe to MQTT Topics
In Python, subscriptions to MQTT topics can be easily established with the subscribe_topic() annotation. The annotation needs to be applied to a method of the Vehicle App class. In C++, the subscribeToTopic() method has to be called. Callbacks for onItem and onError can be set. The following examples provide some more details.
@subscribe_topic("seatadjuster/setPosition/request")
async def on_set_position_request_received(self, data: str) -> None:
    data = json.loads(data)
    logger.info("Set Position Request received: data=%s", data)

#include <fmt/core.h>
#include <nlohmann/json.hpp>

subscribeToTopic("seatadjuster/setPosition/request")->onItem([this](auto&& item) {
    const auto jsonData = nlohmann::json::parse(item);
    logger().info(fmt::format("Set Position Request received: data={}", jsonData.dump()));
});
Under the hood, the vehicle app creates a gRPC endpoint on port 50008, which is exposed to the dapr middleware. The dapr middleware will then subscribe to the MQTT broker and forward the messages to the vehicle app.
To change the app port, set it in the main() method of the app:
from sdv import conf
async def main():
conf.DAPR_APP_PORT = <your port>
// c++ does not use dapr for Pub/Sub messaging at this point
Vehicle App abstraction
Vehicle Apps are inherited from the VehicleApp base class. This enables the Vehicle App to use the Publish & subscribe messaging and the KUKSA Data Broker.
The Vehicle Model instance is passed to the constructor of the VehicleApp class and should be stored in a member variable (e.g. self.vehicle for Python, std::shared_ptr<Vehicle> m_vehicle; for C++), to be used by all methods within the application.
Finally, the run() method of the VehicleApp class is called to start the Vehicle App and register all MQTT topic and Data Broker subscriptions.
Implementation detail
In Python, the subscriptions are based on asyncio, which makes it necessary to call the run() method with an active asyncio event_loop.
A typical skeleton of a Vehicle App looks like this:
class SeatAdjusterApp(VehicleApp):
    def __init__(self, vehicle: Vehicle):
        super().__init__()
        self.vehicle = vehicle

async def main():
    # Main function
    logger.info("Starting seat adjuster app...")
    seat_adjuster_app = SeatAdjusterApp(vehicle)
    await seat_adjuster_app.run()

LOOP = asyncio.get_event_loop()
LOOP.add_signal_handler(signal.SIGTERM, LOOP.stop)
LOOP.run_until_complete(main())
LOOP.close()
# include "VehicleApp.h"
# include "vehicle_model/Vehicle.h"
using namespace velocitas;
class SeatAdjusterApp : public VehicleApp {
public:
SeatAdjusterApp()
: VehicleApp(IVehicleDataBrokerClient::createInstance("vehicledatabroker")),
IPubSubClient::createInstance("localhost:1883", "SeatAdjusterApp"))
{}
private:
::Vehicle Vehicle;
};
int main(int argc, char** argv) {
example::SeatAdjusterApp app;
app.run();
return 0;
}
Further information
- Tutorial: Setup and Explore Development Environment
- Tutorial: Vehicle Model Creation
- Tutorial: Vehicle App Development
- Tutorial: Develop and run integration tests for a Vehicle App
1.2.2 - Vehicle Abstraction Layer (VAL)
Introduction
The Eclipse Velocitas project is using the Vehicle Abstraction Layer (VAL) of the Eclipse KUKSA project, also called KUKSA.VAL. It is a reference implementation of an abstraction layer that allows Vehicle applications to interact with signals and services in the vehicle. It currently consists of a data broker, a CAN feeder, and a set of example services.
Architecture
The image below shows the main components of the Vehicle Abstraction Layer (VAL) and its relation to the Velocitas Development Model.

Overview of the vehicle abstraction layer architecture
KUKSA Data Broker
The KUKSA Data Broker is a gRPC service acting as a broker of vehicle data / data points / signals. It provides central access to vehicle data points arranged in a - preferably standardized - vehicle data model like the COVESA Vehicle Signal Specification (VSS) or others. It is implemented in Rust, can run in a container and provides services to get datapoints, update datapoints and for subscribing to datapoints. Filter- and rule-based subscriptions of datapoints can be used to reduce the number of updates sent to the subscriber.
Data Feeders
Conceptually, a data feeder is a provider of a certain set of data points to the data broker. The source of the contents of the data points provided is specific to the respective feeder.
As of today, the Vehicle Abstraction Layer contains a generic CAN feeder (KUKSA DBC Feeder) implemented in Python, which reads data from a CAN bus based on mappings specified in e.g. a CAN network description (dbc) file. The feeder uses a mapping file and data point metadata to convert the source data to data points and injects them into the data broker using its Collector gRPC interface. The feeder automatically reconnects to the data broker in the event that the connection is lost.
Vehicle Services
A vehicle service offers a gRPC interface allowing vehicle apps to interact with underlying services of the vehicle. It can provide service interfaces to control actuators or to trigger (complex) actions, or provide interfaces to get data. It communicates with the Hardware Abstraction to execute the underlying services, but may also interact with the data broker.
The KUKSA.VAL Services repository contains examples illustrating how such kind of vehicle services can be built.
Hardware Abstraction
Data feeders rely on hardware abstraction. Hardware abstraction is project/platform specific. The reference implementation relies on SocketCAN and vxcan, see https://github.com/eclipse/kuksa.val.feeders/tree/main/dbc2val. The hardware abstraction may offer replaying (e.g., CAN) data from a file (can dump file) when the respective data source (e.g., CAN) is not available.
Information Flow
The vehicle abstraction layer offers an information flow between vehicle networks and vehicle services. The data that can flow is ultimately limited to the data available through the Hardware Abstraction, which is platform/project-specific. The KUKSA Data Broker offers read/subscribe access to data points based on a gRPC service. The data points which are actually available are defined by the set of feeders providing the data into the broker. Services (like the seat service) define which CAN signals they listen to and which CAN signals they send themselves, see documentation. Service implementations may also interact as feeders with the data broker.
Data flow when a Vehicle Application uses the KUKSA Data Broker.

Architectural representation of the KUKSA data broker data flow
Data flow when a Vehicle Application uses a Vehicle Service.

Architectural representation of the vehicle service data flow
Source Code
Source code and build instructions are available in the respective KUKSA.VAL repositories:
Guidelines
- Guidelines for best practices on how to specify a gRPC-based service interface and on how to implement a vehicle service can be found in the kuksa.val.services repository.
1.3 - Deployment Model
The Velocitas project uses a common deployment model. It uses OCI-compliant containers to increase the flexibility for the support of different programming languages and runtimes, which accelerates innovation and development. OCI-compliant containers also allow for a standardized yet flexible deployment process, which increases the ease of operation. OCI-compliant containers are portable to different architectures, as long as the desired platform provides support for them (e.g., a container runtime for arm32, arm64 or amd64).
Guiding principles
The deployment model is guided by the following principles:
- Applications are provided as OCI-compatible container images.
- The container runtime offers a Kubernetes-compatible control plane and API to manage the container lifecycle.
- Helm charts are used as deployment descriptor specification.
The provided template projects come with a preconfigured developer toolchain that accelerates the development process. Through a high degree of automation, the developer toolchain ensures easy creation of all required artifacts needed to follow the Velocitas principles.
Testing your container during development
The Velocitas project provides developers with a repository template and devcontainer that contain everything needed to build a containerized version of your app locally and test it. Check out our tutorial, e.g. for the Python template (https://github.com/eclipse-velocitas/vehicle-app-python-template), to learn more.
Automated container image builds
Velocitas uses GitHub workflows to automate the creation of your containerized application. A workflow is started with every increment of your application code that you push to your GitHub repository. The workflow creates a containerized version of your application and stores this container image in a registry. Further actions are carried out using this container (e.g., integration tests).
The workflows are set up to support multi-platform container creation and generate container images for amd64 and arm64 out of the box. This provides a great starting point for developers and lets you add additional support for further platforms easily.
Further information
1.3.1 - Build and Release Process
The Velocitas project provides a two-stage process for development, continuous integration, and release of a new version of a Vehicle App.
- Stage 1 - Build & Test: On every new push to the main branch or every update to a pull request, a GitHub workflow is automatically executed that builds your application as a container (optionally for different platforms), runs automated tests and code quality checks, and stores all results as GitHub artifacts for future reference with a default retention period of 90 days. The workflow provides quick feedback during development and improves efficient collaboration.
- Stage 2 - Release: Once the application is ready to be released in a new version, a dedicated release workflow is automatically executed as soon as you create a new release via GitHub. The release workflow bundles all relevant images and artifacts into one tagged set of files and pushes it to the GitHub Container Registry. In addition, it makes the information ready to be used for quality assurance and documentation. The image in the GitHub Container Registry can be used in your preferred Over-The-Air (OTA) update system.
The drawing below illustrates the different workflows, actions and artifacts that are automatically created for you. Both workflows are intended as a sensible baseline and can be extended and adapted to your own project's needs.
CI Workflow (ci.yml)
The Continuous Integration (CI) workflow is triggered on every commit to the main branch or when creating/updating a pull request and contains a set of actions to achieve the following objectives:
- Building a container for the app - actions create a containerized version of the Vehicle App, the actions also support creating an image for multiple platforms and CPU architectures.
- Scanning for vulnerabilities - actions scan your code and container for vulnerabilities and in case of findings the workflow will be marked as “failed”.
- Running unit tests & code coverage - actions run unit tests and calculate code coverage for your application, in case of errors or unsatisfactory code coverage, the workflow will be marked as “failed”.
- Running integration tests - actions provision a runtime instance and deploy all required services as containers together with your containerized application to allow for automatically executing integration test cases. In case the test cases fail, the workflow will be marked as “failed”.
- Storing scan & test results as GitHub action artifacts - actions store results from the previously mentioned actions for further reference or download as Github Action Artifacts.
- Storing container images to GitHub action artifacts - at the end of the workflow, the container images created are stored in a Github Action Artifacts so that they can be referenced by the release-workflow later.
Check out the example GitHub workflows in our repositories for python
Release Workflow (release.yml)
The Release workflow is triggered as soon as the main branch is ready for release and the Vehicle App developer creates a new GitHub release. This can be done manually through the GitHub UI.
On creating a new release with a specific new version, GitHub creates a tag and automatically runs the Release workflow defined in .github/workflows/release.yml, given that the CI workflow has run successfully for the current commit on the main branch.
The set of actions included in the Release workflow covers the following objectives:
- Generating and publishing QA information - actions load the QA information from GitHub artifacts stored for the same commit reference and verify it. Additionally, release documentation is generated and added to the GitHub release. If there is no information available for the current commit, the release workflow will fail.
- Pull & label container image - actions pull the Vehicle App container image based on the current commit hash from the GitHub artifacts and label it with the specified tag version. If the image cannot be found, the workflow will fail.
- Publish as GitHub pages - all information from the release together with the project's documentation is built as a static page using Hugo. The result can be published as a GitHub page in your repository.
GitHub Actions artifacts
GitHub Actions artifacts are used for storing data which is generated by the CI workflow and referenced by the Release workflow. This saves time during workflow runs because we don't have to create artifacts multiple times.
GitHub Actions artifacts always have a retention period, which is 90 days by default. This may be configured differently in the specific GitHub organization. After this period, the QA info gets purged automatically. In this case, a re-run of the CI workflow would be required to regenerate all QA info needed for creating a release.
Container Registry
The GitHub container registry is used for storing container images, which are generated by the CI workflow as GitHub artifacts and leveraged by the Release workflow.
The GitHub container registry does not have an automatic cleanup and keeps container images as long as they are not deleted. It is recommended that you automate the removal of older images to limit storage size and costs.
Versioning
Vehicle App image versions are set to the Git tag name during release. Though any versioning scheme can be adopted, the usage of semantic versions is recommended.
If the tag name contains a semantic version, the leading v will be trimmed.
Example: A tag name of v1.0.0 will lead to version 1.0.0 of the Vehicle App container.
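A minimal sketch of this tag-to-version mapping (purely illustrative, not the actual workflow code):

def image_version_from_tag(tag_name: str) -> str:
    # "v1.0.0" -> "1.0.0"; tags without a leading "v" are used as-is.
    return tag_name[1:] if tag_name.startswith("v") else tag_name

assert image_version_from_tag("v1.0.0") == "1.0.0"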
Maintaining multiple versions
If there is a need to maintain multiple versions of a Vehicle App, e.g., to hotfix the production version while working on a new version at the same time or to support multiple versions in production, create and use release branches.
The release process would be the same as described in the overview, except that a release branch (e.g., release/v1.0) is created before the release step and the GitHub release is based on the release branch rather than the main branch. For hotfixes, release branches may be created retroactively from the release tag, if needed.
Further information
- Tutorial: Deploy a Vehicle App with Helm
1.4 - Repository Overview
Repository | Description |
---|---|
vehicle-app-python-template | GitHub Template repository containing an exemplary Vehicle App that uses an exemplary SDK to provide access to vehicle data points and methods. The sample SDK extends the sdv-vehicle-app-python-sdk. In addition the template repository contains the development environment for Visual Studio Code for a Vehicle App as well as the CI/CD workflows that can be used as blueprint for your own Vehicle App written in Python. |
vehicle-app-python-sdk | Provides basic SDK functionality to allow access to vehicle data points and methods. This includes publish & subscribe messaging, the VehicleApp API, the vehicle data model ontology and function-based query & rule support. |
vehicle-model-python | Basic vehicle model for Python generated from VSS with addition of some specialized vehicle services. |
vehicle-app-cpp-template | GitHub Template repository containing an exemplary Vehicle App that uses an exemplary SDK to provide access to vehicle data points and methods. The sample SDK extends the sdv-vehicle-app-cpp-sdk. In addition the template repository contains the development environment for Visual Studio Code for a Vehicle App as well as the CI/CD workflows that can be used as blueprint for your own Vehicle App written in C++. |
vehicle-app-cpp-sdk | Provides basic SDK functionality to allow access to vehicle data points and methods. This includes publish & subscribe messaging, the VehicleApp API, the vehicle data model ontology and function-based query & rule support. |
vehicle-model-cpp | Basic vehicle model for C++ generated from VSS with addition of some specialized vehicle services. |
kuksa.val | Is a part of the Vehicle Abstraction Layer (VAL) of the Eclipse KUKSA project and provides the KUKSA Data Broker. The KUKSA Data Broker offers data points available in the vehicle to the Vehicle Apps semantically aligned to a data model like the Vehicle Signal Specification (VSS). |
kuksa.val.feeders | The KUKSA DBC Feeder is a generic data feeder that reads data from the vehicle’s CAN bus defined by a DBC file, maps them to a set of data points (e.g. according to the VSS) and feeds it into the Data Broker. |
kuksa.val.services | Provides exemplary vehicle services and respective implementations that illustrate how to interact with in-vehicle components and services via a unified access that is semantically described, e.g. in the Vehicle Service Catalog (VSC). |
release-documentation-action | GitHub Action to generate a release documentation from the CI workflow output by rendering it to markdown files so that this can be easily published with GitHub Pages. |
license-check | GitHub Action that collects the licenses of the used components and can be configured to fail with an error message on invalid licenses. |
vehicle-model-generator | Provides basic functionality to create a vehicle model from the given vspec specification for the target programming language. |
2 - Tutorials
2.1 - Quickstart
The following information describes how to set up and configure the Development Container (DevContainer), and how to build, customize and test the sample Vehicle App included in this repository. You will learn how to use the Vehicle App SDK, how to interact with the Vehicle API and how to do CI/CD using the pre-configured GitHub workflows that come with the repository.
Once you have completed all steps, you will have a solid understanding of the development workflow and you will be able to reuse the template repository for your own Vehicle App development project.
Note
Before you start, we recommend that you familiarize yourself with our basic concept to understand the terms mentioned.
Creating Vehicle App Repository
For the organization and the Vehicle App repository, the name MyOrg/MyFirstVehicleApp is used as a reference throughout the rest of the document.
Create your own repository copy from the template repository of your choice (Python/C++) by clicking the green button Use this template. You don't have to include all branches. For more information on template repositories take a look at this GitHub Tutorial.
Starting Development Environment
In the following, you will learn different possibilities to work with the repository. Basically, you can work on your own machine using just Visual Studio Code, or you can set up the environment on a remote agent using GitHub Codespaces.
Visual Studio Code
Visual Studio Code Development Containers make it possible to package a complete Vehicle App development environment, including Visual Studio Code extensions, the Vehicle App SDK, the Vehicle App runtime and all other development & testing tools, into a container that is then started within your Visual Studio Code session.
To be able to use the DevContainer, you have to make sure that you fulfill the following prerequisites:
- Install Docker Engine / Docker Desktop
- Install Visual Studio Code
- Add the Remote-Containers extension via the marketplace or using the command line:
code --install-extension ms-vscode-remote.remote-containers
Proxy configuration
A non-proxy configuration is used by default. If you are working behind a corporate proxy, you will need to specify proxy settings: Working behind a proxy.
With the following steps you will clone and set up your development environment on your own machine using just Visual Studio Code.
- Clone the repo locally using your favorite Git tooling
- Start Visual Studio Code
- Select Open Folder from the File menu and open the root of the cloned repo
- A popup appears on the lower left side of Visual Studio Code. If the popup does not appear, you can also hit F1 and run the command Dev-Containers: Open Folder in Container
- Click on Reopen in Container
- Wait for the container to be set up
The first time, initializing the container will take a few minutes to build the image and to provision the tools inside the container.
Note
If the devContainer fails to build successfully (e.g. due to network issues), wait for the current build to finish, then press F1 and run the command Dev-Containers: Rebuild Container Without Cache.
The devContainer uses the docker-in-docker feature to run Docker containers within the container. Currently, this feature has the limitation that only one instance of a devContainer with the feature enabled can be running at the same time.
Codespaces
Another possibility to use your newly created repository is via GitHub Codespaces. You can either try it out directly in the browser or also use it inside Visual Studio Code. The main thing to remember is that everything is executed on a remote agent and the browser or Visual Studio Code just act as frontends.
To get started with Codespaces, you just have to follow a few steps:
- Open your repository on GitHub (e.g. https://github.com/MyOrg/MyFirstVehicleApp)
- Click on the green Code button and select Codespaces at the top
- Configure your Codespace if needed (defaults to the main branch and a standard agent)
- Click on create
A new window will open where you see the logs for setting up the container. In this window you could now also choose to work with Visual Studio Code. The environment remains on a remote agent and Visual Studio Code establishes a connection to this machine.
Once everything is set up in the Codespace, you can work with it in the same way as with the normal DevContainer inside Visual Studio Code.
Note
Be careful when using Codespaces in the browser and Visual Studio Code locally at the same time: tasks that are started using a browser session will not show up in the Visual Studio Code environment and vice versa. This can lead to problems when using the prepared task scripts.
Starting runtime services
The runtime services (like KUKSA Data Broker or Vehicle Services) are required to develop vehicle apps and run integration tests.
A Visual Studio Code task called Start Vehicle App runtime is available to run these in the correct order.
- Press F1
- Select command Tasks: Run Task
- Select Start VehicleApp runtime
- Choose Continue without scanning the output
You should see the tasks run-mosquitto, run-vehicledatabroker, run-vehicleservices and run-feedercan being executed in the Visual Studio Code output panel.
More information about the tasks is available here.
Debugging Vehicle App
Now that the runtime services are all up and running, let's start a debug session for the Vehicle App as the next step.
- Open the main source file and set a breakpoint in the given method:
  - Python main source file: /app/src/main.py, set a breakpoint in the method on_get_speed_request_received
  - C++: Continue on the Seat Adjuster tab.
- Press F5 to start a debug session of the Vehicle App and see the log output on the DEBUG CONSOLE
To trigger this breakpoint, let's send a message to the Vehicle App using the MQTT broker that is running in the background.
- Open the VSMqtt extension in Visual Studio Code and connect to mosquitto (local)
- Set Subscribe Topic = sampleapp/getSpeed/response and click subscribe
- Set Publish Topic = sampleapp/getSpeed
- Press publish with an empty payload field.
For Python: Follow the guide provided in Import examples and import seat-adjuster.
For C++: Continue with the next steps.
- Open the main source file and set a breakpoint in the given method:
  - Python main source file: /app/src/main.py, set a breakpoint in the method on_set_position_request_received
  - C++ main source file: /app/src/VehicleApp.cpp, set a breakpoint in the method onSetPositionRequestReceived
- Press F5 to start a debug session of the Vehicle App and see the log output on the DEBUG CONSOLE
To trigger this breakpoint, let's send a message to the Vehicle App using the MQTT broker that is running in the background.
- Open the VSMqtt extension in Visual Studio Code and connect to mosquitto (local)
- Set Subscribe Topic = seatadjuster/setPosition/response and click subscribe
- Set Subscribe Topic = seatadjuster/currentPosition and click subscribe
- Set Publish Topic = seatadjuster/setPosition/request
- Set and publish a dummy payload: { "position": 300, "requestId": "xyz" }
Triggering CI Workflow
The provided GitHub workflows are used to build the container image for the Vehicle App, run unit and integration tests, collect the test results, create release documentation and publish the Vehicle App. A detailed description of the workflow can be found here.
By pushing a change to GitHub, the CI workflow will be triggered:
- Make a modification in any of your files
- Commit and push your change:
git add .
git commit -m "removed empty line"
git push
To see the results, open the Actions page of your repository on GitHub, go to CI Workflow and check the workflow output.
Releasing Vehicle App
Now that the CI Workflow was successful, you are ready to build your first release. Your goal is to build a ready-to-deploy container image that is published in the GitHub container registry.
- Open the Code page of your repository on GitHub
- Click on Create a new release in the Releases section on the right side
- Enter a version, e.g. v1.0.0, and click on Publish release
- GitHub will automatically create a tag using the version number
The provided release workflow will be triggered by the release. The release workflow creates release documentation and publishes the container image of the Vehicle App to the GitHub container registry. Open Actions on the repository to see the result.
Deploying Vehicle App
After releasing the Vehicle App to the GitHub container registry, you might ask how to bring the Vehicle App onto a device and provide the required runtime stack on that device. This is where Eclipse Leda comes into play.
Please check out the documentation of Eclipse Leda for more information.
Next steps
- Tutorial: Creating a Vehicle Model
- Tutorial: Create a Vehicle App
- Tutorial: Develop and run integration tests for a Vehicle App
2.1.1 - Import examples
This guide will help you to import examples provided by the SDK package into your template repository.
A Visual Studio Code task called Import example app from SDK is available in /.vscode/tasks.json which can replace your /app directory in your template repository with some example Vehicle Apps from the SDK package.
If you have made changes to your /app directory, commit or stash them before importing the example app.
- Press F1
- Select command Tasks: Run Task
- Select Import example app from SDK
- Choose Continue without scanning the output
- Select seat-adjuster
Run the example Vehicle App
The launch settings are already prepared for the VehicleApp in the template repository in /.vscode/launch.json. The configuration is meant to be as generic as possible to make it possible to run all provided example apps.
Every example app comes with its own /app/AppManifest.json to see which Vehicle Services are configured and needed as a dependency.
To start the app, just press F5 to start a debug session of the example Vehicle App.
2.1.2 - Working behind proxy
We know what a pain and how time consuming it can be to set up your environment behind a corporate proxy. This guide will help you to set it up correctly.
Be aware that correct proxy configuration depends on the setup of your organisation and of course on your personal development environment (hardware, OS, virtualization setup, …). So, we most probably do not cover all issues out there in the developer world, and we encourage you to share hints and improvements with us.
HTTP(s) proxy server
Install and configure the proxy server as recommended or required by your company. For example, you could use PX, an HTTP(s) proxy server that allows applications to authenticate through an NTLM or Kerberos proxy server, typically used in corporate deployments, without having to deal with the actual handshake. Px leverages Windows SSPI or single sign-on and automatically authenticates using the currently logged in Windows user account. It is also possible to run Px on Windows, Linux and macOS without single sign-on by configuring the domain, username and password to authenticate with. (Source: PX)
- Install your HTTP(s) proxy server
- Start your HTTP(s) proxy server
Docker Desktop
You need to install Docker Desktop in the right version. As we recognized a proxy issue in Docker Desktop #12672, we strongly recommend using a Docker Desktop version >= 4.8.2. In case you have an older version on your machine, please update to the current version.
In the next step you need to enter your proxy settings:
- Open Docker Desktop and go to the Settings
- From Resources, select Proxies
- Enable Manual proxy configuration
- Enter your proxy settings; this depends on the configuration you did while setting up your proxy tool, e.g.:
  - Web Server (HTTP): http://localhost:3128
  - Secure Web Server (HTTPS): http://localhost:3128
  - Bypass: localhost,127.0.0.1
- Apply & Restart.
Docker daemon
You also have to configure the Docker daemon, which actually runs the containers, to forward the proxy settings. For this you have to add the proxy configuration to ~/.docker/config.json. Here is an example of a proper config (port and noProxy settings might differ for your setup):
{
"proxies":{
"default":{
"httpProxy":"http://host.docker.internal:3128",
"httpsProxy":"http://host.docker.internal:3128",
"noProxy":"host.docker.internal,localhost,127.0.0.1"
}
}
}
Alternatively, if host.docker.internal cannot be resolved in your setup, use the Docker bridge IP instead:
{
"proxies":{
"default":{
"httpProxy":"http://172.17.0.1:3128",
"httpsProxy":"http://172.17.0.1:3128",
"noProxy":"host.docker.internal,localhost,127.0.0.1"
}
}
}
For more details see: Docker Documentation
Environment Variables
It is required to set the following environment variables:
- HTTP_PROXY - proxy server, e.g. http://localhost:3128
- HTTPS_PROXY - secure proxy server, e.g. http://localhost:3128
On Windows:
setx HTTP_PROXY "http://localhost:3128"
setx HTTPS_PROXY "http://localhost:3128"
On Linux and macOS:
echo "export HTTP_PROXY=http://localhost:3128" >> ~/.bash_profile
echo "export HTTPS_PROXY=http://localhost:3128" >> ~/.bash_profile
source ~/.bash_profile
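To quickly verify that the proxy variables are actually picked up, you can run a small check from a Python interpreter. This is only a hedged sketch assuming the requests package is installed; requests honours the HTTP_PROXY/HTTPS_PROXY environment variables by default.

# Hedged sketch: verify that HTTPS traffic is routed through the configured proxy.
# Assumes the requests package is available (pip install requests).
import os

import requests

print("HTTP_PROXY =", os.environ.get("HTTP_PROXY"))
print("HTTPS_PROXY =", os.environ.get("HTTPS_PROXY"))

# requests reads the proxy settings from the environment automatically.
response = requests.get("https://www.eclipse.org", timeout=10)
print("Reached the internet through the proxy, status:", response.status_code)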
Solving issues with TLS (SSL) certificate validation for HTTPS connections from containers
If you are behind a so-called intercepting proxy (which you most probably are), you can run into certificate issues: your corporate proxy acts as a "man-in-the-middle" to be able to check the transferred data for malicious content. That means there is one protected connection between the application in your local runtime environment and the proxy, and another one from the proxy to the external server your application wants to interact with.
For authentication, corporate proxies often use self-signed certificates (certificates which are not signed by a well-known, official certificate authority). These certificates need to be added to the trusted-certificate store of your local runtime environment. This task is typically handled by the IT department of your corporation (if the OS and the software installed on it are managed by them), so normally you will not run into problems.
When it comes to containers, however, those are typically not managed by your IT department and the proxy certificate(s) are missing. So you need to find a way to install them into the (dev) container you want to execute.
See one of these articles to learn how to achieve that:
https://www.c2labs.com/post/overcoming-proxy-issues-with-docker-containers
https://technotes.shemyak.com/posts/docker-behind-ssl-proxy/
Troubleshooting
Case 1: If you experience issues during the initial DevContainer build, clean all images and volumes, otherwise cached layers might be used:
- Open Docker Desktop
- From Troubleshooting, choose Clean / Purge data
2.2 - Prototyping Integration
The open and web-based playground.digital.auto offers a rapid prototyping environment to explore and validate ideas for a vehicle app that interacts with different vehicle sensors and actuators via standardized APIs specified by the COVESA Vehicle Signal Specification (VSS), without any custom setup requirements. It provides the opportunity:
- To browse, navigate and enhance the vehicle signals (sensors, actuators and branches) in the Vehicle API Catalogue mapped to a 3D model of the vehicle
- To build vehicle app prototypes in the browser using Python and the Vehicle API Catalogue
- To test the vehicle app prototype in a dashboard with 3D animation for API calls
- To create new plugins, which usually represent UX widgets or remote server communication to enhance the vehicle mockup experience in the playground
- To collect and evaluate user feedback to prioritize your development portfolio
Prototype an idea of a Vehicle App
As a first step, open playground.digital.auto, select Get Started in the Prototyping section of the landing page and use the Vehicle Model of your choice.
By clicking on Vehicle APIs you can now browse the existing vehicle signals of the selected vehicle model, which you can use for prototyping your Vehicle App.
The next step would be to prototype your idea. To do so:
- Click on Prototypes (in the top right toolbar),
- Create a new prototype, by clicking on New Prototype and filling out the information or select one from the list,
- Click on the Open button (right side),
- Go to the Code section and
- Start your prototype right away.
To test your prototype, go to the Run section, which opens a dashboard with mockups of all vehicle and application components. The control center on the right side has an integrated terminal showing all of your prototype's output as well as a list of all called VSS APIs. The Run button executes your prototype code from top to bottom. The Debug button allows you to step through your prototype line by line.
To get started quickly, the digital.auto team has added a number of widgets to simulate related elements of the vehicle – like doors, seats, light, etc. – and made them available in the playground.
Feel free to add your own plugins with additional widgets for further car features (maybe an antenna waving a warm "welcome"…?).
Transfer your prototype into a Velocitas Vehicle App
In the previous step you envisioned and prototyped your Vehicle App idea and tested it against mocked vehicle components in digital.auto. To transfer the prototype from playground.digital.auto to your development environment and test it with real Vehicle Services, we provide a project generator. This generator creates a Vehicle App GitHub repository from your prototype code, based on our vehicle-app-python-template.
In the 'Code' section of your prototype in playground.digital.auto you find the button 'Create Eclipse Velocitas Project'.
If you press this button, you will be forwarded to GitHub to log in with your GitHub account and authorize velocitas-project-generator to create the repository for you. After you have authorized the project generator, you will be redirected to playground.digital.auto and asked for a repository name (which is also the app's name). After pressing "Create repository", the project generator takes over your prototype code, adapts it to the structure of the vehicle-app-python-template and creates a new private repository under your GitHub user.
After the generation of the repository is completed, a pop-up dialogue with the URL of your repository is displayed. Among other things, the newly created repository will contain:
- /app/src/main.py containing your modified prototype code
- /app/AppManifest.json with definition of required services
- /app/requirements.txt with definition of dependencies
- /.devcontainer/ required scripts to install every prerequisite in Microsoft Visual Studio Code
- /.github/workflows/ with all required CI/CD pipelines to build, test and deploy the vehicle application as container image to the GitHub container registry
Your prototype Vehicle App, transferred into a GitHub repository, is now ready to be extended. Clone your newly created repository, open the Vehicle App in Microsoft Visual Studio Code and start to extend it. You can find more information here:
CodeQL Analysis
By default, the template repository comes with automated CodeQL analysis to detect common vulnerabilities and coding errors. It is available if you have a GitHub Advanced Security license in your organization or if your repository is public. To change visibility: go to your repository settings -> General -> Danger Zone (at the bottom) -> Change repository visibility -> Change visibility to public.
Manual Adaptations
Since the project generator extracts typical Python syntax and patterns from the prototype, there may be cases where manual code adaptations are required.
Most of the prototype code is extracted into the on_start method of the Velocitas Vehicle App.
- Prototyped local variables which need to be accessed e.g. in callback methods have to become global (move them out above the VehicleApp class), as shown in the sketch below.
- Depending on how variables are used in print/logging statements, those statements need to be adapted for Velocitas.
- Keep in mind that Velocitas uses the standard VSS model. If you use custom signals in your prototype, you have to find similar standard signals to use in Velocitas.
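To illustrate the first two points, here is a hedged before/after sketch; the variable and app names are made up for illustration and are not part of the generator's output.

# digital.auto prototype (before): a local variable and a plain print statement
#   requested_position = 0
#   print("Requested position: " + str(requested_position))

# Adapted Velocitas app (after): module-level state and the SDK logger
import logging

from sdv.vehicle_app import VehicleApp

logger = logging.getLogger(__name__)

requested_position = 0  # moved out above the VehicleApp class so callbacks can access it


class SampleApp(VehicleApp):  # hypothetical app name
    async def on_start(self):
        global requested_position
        requested_position = 42
        logger.info("Requested position: %s", requested_position)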
2.2.1 - Service Integration
Services make sure that something actually happens when you write a VSS datapoint. Eclipse Velocitas provides example seat, HVAC and light services. If your Vehicle App makes use of e.g. Vehicle.Cabin.Seat.Row1.Pos1.Position, Vehicle.Body.Lights.IsBackupOn, Vehicle.Body.Lights.IsHighBeamOn or Vehicle.Body.Lights.IsLowBeamOn, you are in for some real action. To learn more, visit Vehicle Services.
You can validate the interaction of a service with your Vehicle App by adding the Vehicle Service to the /app/AppManifest.json, starting the services locally and debugging your app.
Modify services
For more advanced usage you can also try modifying existing services. Check out the seat service for example, modify it and integrate it into your Vehicle App repository.
Create your own services
If you want to create your own service, the KUKSA.val Services repository contains examples illustrating how such vehicle services can be built. You need to write an application that talks to KUKSA.val, listens to changes of the target value of some VSS datapoint and then does whatever you want. You can achieve this by using the KUKSA.val gRPC API with any programming language of your choice (learn more about gRPC).
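As an illustration only, the following hedged sketch shows the general idea in Python. It assumes the kuksa-client package and a KUKSA databroker reachable on localhost:55555; both the package and its API names are assumptions, so check the KUKSA.val documentation for your setup.

# Hedged sketch of a minimal "service": react to new target values of a VSS datapoint.
# Assumes the kuksa-client package (pip install kuksa-client) and a databroker on localhost:55555.
from kuksa_client.grpc import VSSClient

SEAT_POSITION = "Vehicle.Cabin.Seat.Row1.Pos1.Position"

with VSSClient("127.0.0.1", 55555) as client:
    # Blocks and yields an update whenever an app writes a new target value.
    for updates in client.subscribe_target_values([SEAT_POSITION]):
        target = updates.get(SEAT_POSITION)
        if target is not None:
            print(f"New target seat position requested: {target.value}")
            # ...drive the actuator here, then report the current value back, e.g.:
            # client.set_current_values({SEAT_POSITION: Datapoint(target.value)})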
2.3 - Vehicle App Development
2.3.1 - Python Vehicle App Development
We recommend that you make yourself familiar with the Vehicle App SDK first, before going through this tutorial.
The following information describes how to develop and test the sample Vehicle App that is included in the template repository. You will learn how to use the Vehicle App SDK and how to interact with the Vehicle Model.
Once you have completed all steps, you will have a solid understanding of the development workflow and you will be able to reuse the template repository for your own Vehicle App development project.
Develop your first Vehicle App
This section describes how to develop your first Vehicle App. Before you start building a new Vehicle App, make sure you have already read the other manuals:
Once you have established your development environment, you will be able to start developing your first Vehicle App.
For this tutorial, you will recreate the Vehicle App that is included with the SDK repository: The Vehicle App allows changing the positions of the seats in the car and also provides their current positions to other applications.
A detailed explanation of the use case and the example is available here.
At first, you have to create the main Python script called main.py in /app/src. All the relevant code for the new Vehicle App goes there. Afterwards, there are several steps you need to consider when developing the app:
- Manage your imports
- Enable logging
- Initialize your class
- Start the app
Manage your imports
Before you start developing in the main.py you just created, add the required imports; their purpose will become clearer as the development progresses:
import asyncio
import json
import logging
import signal
import grpc
from sdv.util.log import ( # type: ignore
get_opentelemetry_log_factory,
get_opentelemetry_log_format,
)
from sdv.vehicle_app import VehicleApp, subscribe_topic
from sdv_model import Vehicle, vehicle # type: ignore
from sdv_model.proto.seats_pb2 import BASE, SeatLocation # type: ignore
Enable logging
The following logging configuration applies the default log format provided by the SDK and sets the log level to INFO:
logging.setLogRecordFactory(get_opentelemetry_log_factory())
logging.basicConfig(format=get_opentelemetry_log_format())
logging.getLogger().setLevel("INFO")
logger = logging.getLogger(__name__)
Initialize your class
The main class of your new Vehicle App needs to inherit the VehicleApp
provided by the SDK.
class MyVehicleApp(VehicleApp):
In class initialization, you have to pass an instance of the Vehicle Model:
def __init__(self, vehicle_client: Vehicle):
super().__init__()
self.Vehicle = vehicle_client
We save the vehicle object to use it in our app. Now, you have initialized the app and can continue developing relevant methods.
Start the app
Here’s an example of how to start the MyVehicleApp App that we just developed:
async def main():
"""Main function"""
logger.info("Starting my VehicleApp...")
vehicle_app = MyVehicleApp(vehicle)
await vehicle_app.run()
LOOP = asyncio.get_event_loop()
LOOP.add_signal_handler(signal.SIGTERM, LOOP.stop)
LOOP.run_until_complete(main())
LOOP.close()
The app is now running. In order to use it properly, we will enhance the app with more features in the next sections.
Vehicle Model
In order to facilitate the implementation, the whole vehicle is abstracted into model classes. Please check the tutorial about creating models for more details on this topic. In this section, the focus is on using the models.
Import the model
The first thing you need to do is to get access to the Vehicle Model. In the section about distributing a model, you got to know the different methods.
If you just want to use your model in one app, you can simply copy the classes into your /app/src
-folder. In this example, you find the classes inside the vehicle_model
-folder. As you have already seen in the section about initializing the app, we need the vehicle model
to use the app.
As you know, the model has a single Datapoint for the speed and a reference to the cabin
-model.
Accessing the speed can be done via
vehicle_speed = await self.Vehicle.Speed.get()
As the get-method of the Datapoint class is a coroutine, you have to use the await keyword when calling it.
If you want to go deeper into the vehicle, to access a single seat for example, you just have to walk down the model chain:
self.DriverSeatPosition = await self.Vehicle.Cabin.Seat.Row1.Pos1.Position.get()
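For example, such get() calls typically live inside one of the app's async methods, like on_start. The following is a minimal sketch reusing the model and logger from above:

async def on_start(self):
    """Run when the vehicle app starts (sketch)."""
    # Both calls are coroutines, so they must be awaited.
    vehicle_speed = await self.Vehicle.Speed.get()
    driver_seat_position = await self.Vehicle.Cabin.Seat.Row1.Pos1.Position.get()
    logger.info("Speed: %s, driver seat position: %s", vehicle_speed, driver_seat_position)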
Subscription to Datapoints
If you want to get notified about changes of a specific Datapoint
, you can subscribe to this event, e.g. as part of the on_start
-method in your app.
async def on_start(self):
"""Run when the vehicle app starts"""
await self.Vehicle.Cabin.Seat.Row(1).Pos(1).Position.subscribe(
self.on_seat_position_changed
)
Every Datapoint provides a .subscribe() method that allows for providing a callback function which will be invoked on every datapoint update. Subscribed data is available in the respective DataPointReply object and needs to be accessed via the reference to the subscribed datapoint. The returned object is of type TypedDataPointResult, which holds the value of the data point and the timestamp at which the value was captured by the data broker.
Therefore the on_seat_position_changed
callback function needs to be implemented like this:
async def on_seat_position_changed(self, data: DataPointReply):
# handle the event here
response_topic = "seatadjuster/currentPosition"
position = data.get(self.Vehicle.Cabin.Seat.Row(1).Pos(1).Position).value
# ...
Note
The SDK also supports annotations for subscribing to datapoint changes with @subscribe_data_points
defined by the whole path to the Datapoint
of interest.
@subscribe_data_points("Vehicle.Cabin.Seat.Row1.Pos1.Position")
async def on_vehicle_seat_change(self, data: DataPointReply):
response_topic = "seatadjuster/currentPosition"
response_data = {"position": data.get(self.Vehicle.Cabin.Seat.Row1.Pos1.Position).value}
await self.publish_mqtt_event(response_topic, json.dumps(response_data))
Similarly, subscribed data is available in the respective DataPointReply object and needs to be accessed via the reference to the subscribed datapoint.
Services
Services are used to communicate with other parts of the vehicle. Please read the basics about them here.
The following lines show you how to use the MoveComponent
-method of the SeatService
from the vehicle model:
location = SeatLocation(row=1, index=1)
await self.Vehicle.Cabin.SeatService.MoveComponent(
location, BASE, data["position"]
)
In order to know which seat to move, you have to pass a SeatLocation
object as the first parameter. The second argument specifies the component to be moved. The possible components are defined in the proto-files. The last parameter to be passed into the method is the final position of the component.
Make sure to use the
await
keyword when calling service methods, since these methods are coroutines.
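Putting this together, a hedged sketch of a small helper coroutine that moves the driver seat and guards the call against service errors could look as follows; the exact exception type raised on failure depends on the SDK and middleware version and is an assumption here.

async def move_driver_seat(self, position: int) -> None:
    """Move the driver seat to the given position (sketch with basic error handling)."""
    location = SeatLocation(row=1, index=1)
    try:
        await self.Vehicle.Cabin.SeatService.MoveComponent(location, BASE, position)
    except grpc.RpcError as err:  # assumption: failures surface as gRPC errors
        logger.error("Moving the seat failed: %s", err)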
MQTT
Interaction with other Vehicle Apps or the cloud is enabled by using Mosquitto MQTT Broker. The MQTT broker runs inside a docker image, which is started automatically after starting the DevContainer.
In the quickstart section about the Vehicle App, you already tested sending MQTT messages to the app.
In the previous sections, you generally saw how to use Vehicle Models
, Datapoints
and GRPC Services
. In this section, you will learn how to combine them with MQTT.
In order to receive and process MQTT messages inside your app, simply use the @subscribe_topic
annotations from the SDK for an additional method on_set_position_request_received
you have to implement:
@subscribe_topic("seatadjuster/setPosition/request")
async def on_set_position_request_received(self, data_str: str) -> None:
data = json.loads(data_str)
response_topic = "seatadjuster/setPosition/response"
response_data = {"requestId": data["requestId"], "result": {}}
# ...
The on_set_position_request_received method will now be invoked every time a message is published to the subscribed topic "seatadjuster/setPosition/request". The message data (string) is provided as a parameter. In the example above, the data is parsed into JSON (data = json.loads(data_str)).
In order to publish data to topics, the SDK provides the appropriate convenience method: self.publish_mqtt_event()
which will be added to the on_seat_position_changed
callback function from before.
async def on_seat_position_changed(self, data: DataPointReply):
response_topic = "seatadjuster/currentPosition"
position = data.get(self.Vehicle.Cabin.Seat.Row(1).Pos(1).Position).value
await self.publish_mqtt_event(
response_topic,
json.dumps({"position": position}),
)
The above example illustrates how one can easily publish messages. In this case, every time the seat position changes, the new position is published to seatadjuster/currentPosition.
Your main.py
should now have a full implementation for class MyVehicleApp(VehicleApp):
containing:
__init__()
on_start()
on_seat_position_changed()
on_set_position_request_received()
and last but not least a main()
-method to run the app.
Check the seat-adjuster
example to see a more detailed implementation including error handling.
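As a rough orientation, a condensed sketch of how these pieces could fit together in main.py is shown below. It is simplified compared to the full seat-adjuster example and assumes the imports from the beginning of this tutorial plus the DataPointReply type from the SDK.

class MyVehicleApp(VehicleApp):
    """Condensed sketch of the Vehicle App developed in this tutorial."""

    def __init__(self, vehicle_client: Vehicle):
        super().__init__()
        self.Vehicle = vehicle_client

    async def on_start(self):
        await self.Vehicle.Cabin.Seat.Row(1).Pos(1).Position.subscribe(
            self.on_seat_position_changed
        )

    async def on_seat_position_changed(self, data: DataPointReply):
        position = data.get(self.Vehicle.Cabin.Seat.Row(1).Pos(1).Position).value
        await self.publish_mqtt_event(
            "seatadjuster/currentPosition", json.dumps({"position": position})
        )

    @subscribe_topic("seatadjuster/setPosition/request")
    async def on_set_position_request_received(self, data_str: str) -> None:
        data = json.loads(data_str)
        location = SeatLocation(row=1, index=1)
        await self.Vehicle.Cabin.SeatService.MoveComponent(
            location, BASE, data["position"]
        )
        await self.publish_mqtt_event(
            "seatadjuster/setPosition/response",
            json.dumps({"requestId": data["requestId"], "result": {}}),
        )


async def main():
    """Main function"""
    logger.info("Starting my VehicleApp...")
    vehicle_app = MyVehicleApp(vehicle)
    await vehicle_app.run()


LOOP = asyncio.get_event_loop()
LOOP.add_signal_handler(signal.SIGTERM, LOOP.stop)
LOOP.run_until_complete(main())
LOOP.close()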
UnitTests
Unit testing is an important part of the development, so let’s have a look at how to do that. You can find some example tests in /app/tests/unit
.
First, you have to import the relevant packages for unit testing and everything you need for your implementation:
from unittest import mock
import pytest
from sdv.vehicle_app import VehicleApp
from sdv_model.Cabin.SeatService import SeatService # type: ignore
from sdv_model.proto.seats_pb2 import BASE, SeatLocation # type: ignore
@pytest.mark.asyncio
async def test_for_publish_to_topic():
# Disable no-value-for-parameter, seems to be false positive with mock lib
# pylint: disable=no-value-for-parameter
with mock.patch.object(
VehicleApp, "publish_mqtt_event", new_callable=mock.AsyncMock, return_value=-1
):
response = await VehicleApp.publish_mqtt_event(
str("sampleTopic"), get_sample_request_data() # type: ignore
)
assert response == -1
def get_sample_request_data():
return {"position": 330, "requestId": "123456789"}
Looking at a test, you will notice the @pytest.mark.asyncio decorator. It is required whenever the test itself is defined as a coroutine. The next step is to mock all external dependencies. Here, mock.patch.object takes four arguments: the object to be mocked, the method whose return value you want to modify, the callable used to create the mock and the return value.
After creating the mock, you can test the method and check the response. Use asserts to make your test fail if the response does not match.
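As a second, hedged example, the same mocking approach can be used to assert that a method was called with the expected arguments; the topic name is taken from the sections above.

@pytest.mark.asyncio
async def test_publish_current_position_called_with_expected_arguments():
    with mock.patch.object(
        VehicleApp, "publish_mqtt_event", new_callable=mock.AsyncMock
    ) as mocked_publish:
        await VehicleApp.publish_mqtt_event(
            "seatadjuster/currentPosition", '{"position": 330}'  # type: ignore
        )
        mocked_publish.assert_called_once_with(
            "seatadjuster/currentPosition", '{"position": 330}'
        )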
See the results
Once the implementation is done, it is time to run and debug the app.
Run your App
In order to run the app, make sure you have the seatservice configured as a dependency in your ./AppManifest.json. Read more about it in the run runtime services section.
If you want to run the app together with a Dapr sidecar and use the Dapr middleware, you have to use the “dapr run …” command to start your app:
dapr run --app-id seatadjuster --app-protocol grpc --app-port 50008 --config ./.dapr/config.yaml --components-path ./.dapr/components python3 ./app/src/main.py
You already have seen this command and how to check if it is working in the general setup.
Two parameters may be unclear in this command:
- the config file config.yaml
- the components-path
For now, you just need to know that these parameters are needed to make everything work together.
The config.yaml has to be placed in the folder called .dapr
and has the following content:
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: config
spec:
tracing:
samplingRate: "1"
zipkin:
endpointAddress: http://localhost:9411/api/v2/spans
features:
- name: proxy.grpc
enabled: true
An important part is enabling the gRPC proxy to make the communication work.
Inside the .dapr
folder you find another folder called components
. There you only find one configuration file for the MQTT communication with the following content:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: mqtt-pubsub
namespace: default
spec:
type: pubsub.mqtt
version: v1
metadata:
- name: url
value: "mqtt://localhost:1883"
- name: qos
value: 1
- name: retain
value: "false"
- name: cleanSession
value: "false"
If you want to know more about Dapr and its configuration, please visit https://dapr.io.
Debug your Vehicle App
In the introduction about debugging, you saw how to start a debugging session. In this section, you will learn what is happening in the background.
The debug session launch settings are already prepared for the VehicleApp
in /.vscode/launch.json
.
"configurations": [
{
"type": "python",
"justMyCode": false,
"request": "launch",
"name": "VehicleApp",
"program": "${workspaceFolder}/app/src/main.py",
"console": "integratedTerminal",
"preLaunchTask": "dapr-VehicleApp-run",
"postDebugTask": "dapr-VehicleApp-stop",
"env": {
"DAPR_HTTP_PORT": "3500",
"DAPR_GRPC_PORT": "${input:DAPR_GRPC_PORT}",
"SERVICE_DAPR_APP_ID": "${input:SERVICE_NAME}",
"VEHICLEDATABROKER_DAPR_APP_ID": "vehicledatabroker"
}
}
]
We specify which Python script to run using the program key. With the preLaunchTask and postDebugTask keys, you can also specify tasks to run before or after debugging. In this example, Dapr is set up to start the app before debugging and to stop it again afterwards. Below you can see the two tasks, which can be found in /.vscode/tasks.json.
{
"label": "dapr-VehicleApp-run",
"appId": "vehicleapp",
"appPort": 50008,
"componentsPath": "./.dapr/components",
"config": "./.dapr/config.yaml",
"appProtocol": "grpc",
"type": "dapr",
"args": [
"--dapr-grpc-port",
"50001",
"--dapr-http-port",
"3500"
],
}
{
"label": "dapr-VehicleApp-stop",
"type": "shell",
"command": [
"dapr stop --app-id vehicleapp"
],
"presentation": {
"close": true,
"reveal": "never"
},
}
Lastly, the environment variables can also be specified.
You can adapt the configuration in /.vscode/launch.json
to your needs (e.g., change the ports, add new tasks) or even add a completely new configuration for another Vehicle App.
Environment variables can also be configured in the central /app/AppManifest.json and read by the launch configuration in Visual Studio Code through the Tasks Shell Input extension, which is preinstalled in the devcontainer.
"inputs": [
{
"id": "DAPR_GRPC_PORT",
"type": "command",
"command": "shellCommand.execute",
"args": {
"useSingleResult": true,
"command": "cat ./app/AppManifest.json | jq .[].DAPR_GRPC_PORT | tr -d '\"'",
"cwd": "${workspaceFolder}",
}
},
{
"id": "SERVICE_NAME",
"type": "command",
"command": "shellCommand.execute",
"args": {
"useSingleResult": true,
"command": "cat ./app/AppManifest.json | jq .[].Name | tr -d '\"'",
"cwd": "${workspaceFolder}",
}
}
]
Once you are done, you have to switch to the debugging tab (sidebar on the left) and select your configuration using the dropdown at the top. You can now start the debug session by clicking the play button or pressing F5. Debugging is now as simple as in every other IDE: just place your breakpoints and follow the flow of your Vehicle App.
Next steps
- Concept: SDK Overview
- Tutorial: Deploy runtime services in Kubernetes mode
- Tutorial: Start runtime services locally
- Tutorial: Creating a Python Vehicle Model
- Tutorial: Develop and run integration tests for a Vehicle App
- Concept: Deployment Model
- Tutorial: Deploy a Python Vehicle App with Helm
2.3.2 - C++ Vehicle App Development
We recommend that you make yourself familiar with the Vehicle App SDK first, before going through this tutorial.
The following information describes how to develop and test the sample Vehicle App that is included in the SDK repository. You will learn how to use the Vehicle App SDK and how to interact with the Vehicle Model.
Once you have completed all steps, you will have a solid understanding of the development workflow and you will be able to reuse the template repository for your own Vehicle App development project.
Develop your first Vehicle App
This section describes how to develop your first Vehicle App. Before you start building a new Vehicle App, make sure you have already read the other manuals:
Once you have established your development environment, you will be able to start developing your first Vehicle App.
For this tutorial, you will recreate the Vehicle App that is included with the SDK repository: The Vehicle App allows changing the positions of the seats in the car and also provides their current positions to other applications.
A detailed explanation of the use case and the example is available here.
At first, you have to create the main C++ file, which we will call App.cpp, in /app/src. All the relevant code for the new Vehicle App goes there. Afterwards, there are several steps you need to consider when developing the app:
- Manage your includes
- Initialize your class
- Start the app
Manage your includes
Before you start developing in the App.cpp you just created, add the required includes; their purpose will become clearer as the development progresses:
#include "sdk/VehicleApp.h"
#include "sdk/IPubSubClient.h"
#include "sdk/IVehicleDataBrokerClient.h"
#include "sdk/Logger.h"
#include "vehicle_model/Vehicle.h"
#include <memory>
using namespace velocitas;
Initialize your class
The main class of your new Vehicle App needs to inherit the VehicleApp
provided by the SDK.
class MyVehicleApp : public VehicleApp {
public:
// <remaining code in this tutorial goes here>
private:
::Vehicle Vehicle; // this member exists to provide simple access to the vehicle model
};
In your constructor, you have to choose which implementations to use for the VehicleDataBrokerClient and the PubSubClient. By default we suggest you use the factory methods to generate the default implementations: IVehicleDataBrokerClient::createInstance
and IPubSubClient::createInstance
. These will create a VehicleDataBrokerClient which connects to the VAL via gRPC and an MQTT-based pub-sub client.
MyVehicleApp()
    : VehicleApp(IVehicleDataBrokerClient::createInstance("vehicledatabroker"), // this is the dapr-app-id of the KUKSA Databroker in the VAL.
                 IPubSubClient::createInstance("localhost:1883", "MyVehicleApp")) // the URI to the MQTT broker and the client ID of the MQTT client.
{}
Now, you have initialized the app and can continue developing relevant methods.
Start the app
Here’s an example of how to start the MyVehicleApp
app that we just developed:
int main(int argc, char** argv) {
MyVehicleApp app;
app.run();
return 0;
}
The app is now running. In order to use it properly, we will enhance the app with more features in the next sections.
Vehicle Model
In order to facilitate the implementation, the whole vehicle is abstracted into model classes. Please check the tutorial about creating models for more details on this topic. In this section, the focus is on using the models.
Import the model
The first thing you need to do is to get access to the Vehicle Model. In the section about distributing a model, you got to know the different methods.
If you just want to use your model in one app, you can simply copy the classes into your src
-folder. In this example, you find the classes inside the vehicle_model
-folder. As you have already seen in the section about initializing the app, we need the vehicle model
to use the app.
As you know, the model has a single Datapoint for the speed and a reference to the cabin
-model.
Accessing the speed can be done via
auto vehicleSpeedBlocking = getDataPoint(Vehicle.Speed)->await();
getDataPoint(Vehicle.Speed)->onResult([](auto vehicleSpeed){
    logger().info("Got speed!");
});
getDataPoint() returns a shared_ptr to an AsyncResult which, as the name implies, is the result of an asynchronous operation. There are two options to access the value of the asynchronous result: we can either use await() and block the calling thread until a result is available, or use onResult(...), which allows us to inject a function pointer or a lambda that is called once the result becomes available.
If you want to go deeper into the vehicle, to access a single seat for example, you just have to walk down the model chain:
auto driverSeatPosition = getDataPoint(Vehicle.Cabin.Seat.Row(1).Pos(1).Position)->await();
Subscription to Datapoints
If you want to get notified about changes of a specific DataPoint
, you can subscribe to this event, e.g. as part of the onStart
-method in your app.
void onStart() override {
subscribeDataPoints(QueryBuilder::select(Vehicle.Cabin.Seat.Row(1).Pos(1).Position).build())
->onItem([this](auto&& item) { onSeatPositionChanged(std::forward<decltype(item)>(item)); })
->onError([this](auto&& status) { onError(std::forward<decltype(status)>(status)); });
}
void onSeatPositionChanged(const DataPointsResult& result) {
const auto dataPoint = result.get(Vehicle.Cabin.Seat.Row(1).Pos(1).Position);
logger().info(dataPoint->value());
// do something with the data point value
}
The VehicleApp
class provides the subscribeDataPoints
-method which allows to listen for changes on one or many data points. Once a change in any of the data points is registered, the callback registered via AsyncSubscription::onItem
is called. Conversely, the callback registered via AsyncSubscription::onError
is called once there is any error during communication with the KUKSA data broker.
The result passed to the callback registered via onItem is an object of type DataPointsResult, which holds all data points that have changed. Individual data points can be accessed directly by their reference: result.get(Vehicle.Cabin.Seat.Row(1).Pos(1).Position).
Services
Services are used to communicate with other parts of the vehicle. Please read the basics about them here.
The following few lines show you how to use the moveComponent
-method of the SeatService
you have created:
vehicle::cabin::SeatService::SeatLocation location{1, 1};
Vehicle.Cabin.SeatService.moveComponent(
location, vehicle::cabin::SeatService::Component::Base, 300
)->await();
In order to know which seat to move, you have to pass a SeatLocation
object as the first parameter. The second argument specifies the component to be moved. The possible components are defined in the proto-files. The last parameter to be passed into the method is the final position of the component.
Make sure to call the await() method or to register a callback via onResult() when calling service methods, otherwise you don't know when your asynchronous call will finish.
MQTT
Interaction with other Vehicle Apps or the cloud is enabled by using Mosquitto MQTT Broker. The MQTT broker runs inside a docker image, which is started automatically after starting the DevContainer.
In the quickstart section about the Vehicle App, you already tested sending MQTT messages to the app.
In the previous sections, you generally saw how to use Vehicle Models
, Datapoints
and GRPC Services
. In this section, you will learn how to combine them with MQTT.
In order to receive and process MQTT messages inside your app, simply use the VehicleApp::subscribeTopic
method provided by the SDK:
void onStart() override {
subscribeTopic("seatadjuster/setPosition/request")
->onItem([this](auto&& item){ onSetPositionRequestReceived(std::forward<decltype(item)>(item)); });
}
void onSetPositionRequestReceived(const std::string& data) {
const auto jsonData = nlohmann::json::parse(data);
const auto responseTopic = "seatadjuster/setPosition/response";
nlohmann::json respData({{"requestId", jsonData["requestId"]}, {"result", {}}});
}
The onSetPositionRequestReceived method will now be invoked every time a message is published to the subscribed topic "seatadjuster/setPosition/request". The message data (string) is provided as a parameter. In the example above, the data is parsed into JSON via nlohmann::json::parse(data).
In order to publish data to other subscribers, the SDK provides the appropriate convenience method: VehicleApp::publishToTopic(...)
void MyVehicleApp::onSeatPositionChanged(const DataPointsResult& result) {
    const auto responseTopic = "seatadjuster/currentPosition";
    nlohmann::json respData({{"position", result.get(Vehicle.Cabin.Seat.Row(1).Pos(1).Position)->value()}});
    publishToTopic(responseTopic, respData.dump());
}
The above example illustrates how one can easily publish messages. In this case, every time the seat position changes, the new position is published to seatadjuster/currentPosition.
See the results
Once the implementation is done, it is time to run and debug the app.
Build your App
Before you can run the Vehicle App you need to build it first. To do so, simply run the provided build.sh
script found in the root of the SDK. It does accept some arguments, but that is out of scope for this tutorial.
Warning
If this is your first time building, you might have to run install_dependencies.sh first.
Run your App
If you want to run the app together with a Dapr sidecar and use the Dapr middleware, you have to use the “dapr run …” command to start your app:
dapr run --app-id myvehicleapp --app-port 50008 --config ./.dapr/config.yaml --components-path ./.dapr/components build/bin/App
You already have seen this command and how to check if it is working in the general setup.
Two parameters may be unclear in this command:
- the config file config.yaml
- the components-path
For now, you just need to know that these parameters are needed to make everything work together.
The config.yaml has to be placed in the folder called .dapr
and has the following content:
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
name: config
spec:
tracing:
samplingRate: "1"
zipkin:
endpointAddress: http://localhost:9411/api/v2/spans
features:
- name: proxy.grpc
enabled: true
An important part is enabling the gRPC proxy to make the communication work.
Inside the .dapr
folder you find another folder called components
. There you only find one configuration file for the MQTT communication with the following content:
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
name: mqtt-pubsub
namespace: default
spec:
type: pubsub.mqtt
version: v1
metadata:
- name: url
value: "mqtt://localhost:1883"
- name: qos
value: 1
- name: retain
value: "false"
- name: cleanSession
value: "false"
If you want to know more about Dapr and its configuration, please visit the Dapr documentation.
Debug your Vehicle App
In the introduction about debugging, you saw how to start a debugging session. In this section, you will learn what is happening in the background.
The debug session launch settings are already prepared for the VehicleApp
.
"configurations": [
{
"name": "VehicleApp - Debug (dapr)",
"type": "cppdbg",
"request": "launch",
"program": "${workspaceFolder}/build/bin/App",
"args": [],
"stopAtEntry": false,
"cwd": "${workspaceFolder}",
"environment": [
{
"name": "DAPR_GRPC_PORT",
"value": "50001"
},
{
"name": "DAPR_HTTP_PORT",
"value": "3500"
}
],
"externalConsole": false,
"MIMode": "gdb",
"setupCommands": [
{
"description": "Enable pretty-printing for gdb",
"text": "-enable-pretty-printing",
"ignoreFailures": true
},
{
"description": "Set Disassembly Flavor to Intel",
"text": "-gdb-set disassembly-flavor intel",
"ignoreFailures": true
}
],
"preLaunchTask": "dapr-VehicleApp-run",
"postDebugTask": "dapr-VehicleApp-stop"
}
]
We specify which binary to run using the program key. With the preLaunchTask and postDebugTask keys, you can also specify tasks to run before or after debugging. In this example, Dapr is set up to start the app before debugging and to stop it again afterwards. Below you can see the two tasks.
{
"label": "dapr-VehicleApp-run",
"appId": "myvehicleapp",
"componentsPath": "./.dapr/components",
"config": "./.dapr/config.yaml",
"grpcPort": 50001,
"httpPort": 3500,
"type": "dapr",
"presentation": {
"close": true,
"reveal": "never"
},
}
{
"label": "dapr-VehicleApp-stop",
"type": "shell",
"command": [
"dapr stop --app-id myvehicleapp"
],
"presentation": {
"close": true,
"reveal": "never"
},
}
Lastly, the environment variables can also be specified.
You can adapt the JSON to your needs (e.g., change the ports, add new tasks) or even add a completely new configuration for another Vehicle App.
Once you are done, you have to switch to the debugging tab (sidebar on the left) and select your configuration using the dropdown at the top. You can now start the debug session by clicking the play button or pressing F5. Debugging is now as simple as in every other IDE: just place your breakpoints and follow the flow of your Vehicle App.
Next steps
- Concept: SDK Overview
- Tutorial: Deploy runtime services in Kubernetes mode
- Tutorial: Start runtime services locally
- Tutorial: Creating a Vehicle Model
- Tutorial: Develop and run integration tests for a Vehicle App
- Concept: Deployment Model
- Tutorial: Deploy a Vehicle App with Helm
2.4 - Vehicle Model Creation
A Vehicle Model makes it possible to easily get vehicle data from the KUKSA Data Broker and to execute remote procedure calls over gRPC against Vehicle Services and other Vehicle Apps. It is generated from the underlying semantic models for a concrete programming language as a graph-based, strongly-typed, intellisense-enabled library.
This tutorial will show you how to:
- Create a Vehicle Model
- Add a Vehicle Service to the Vehicle Model
- Distribute your Python Vehicle Model
Note
A Vehicle Model should be defined in its own package. This makes it possible to distribute the Vehicle Model later as a standalone package and to use it in different Vehicle App projects.
The creation of a new vehicle model is only required if the vehicle signals (like sensors and actuators) defined in the current version of the COVESA Vehicle Signal Specification (VSS) are not sufficient for the definition of your vehicle API. Otherwise, you can use the default vehicle model we generated for you; see Python Vehicle Model and C++ Vehicle Model.
Prerequisites
- Visual Studio Code with the Python extension installed. For information on how to install extensions on Visual Studio Code, see VS Code Extension Marketplace.
Create a Vehicle Model from VSS specification
A Vehicle Model can be generated from a COVESA Vehicle Signal Specification (VSS). VSS introduces a domain taxonomy for vehicle signals, in the sense of classical attributes, sensors and actuators, with the raw data communicated over vehicle buses. The Velocitas vehicle-model-generator creates a Vehicle Model from the given specification and generates a package for use in Vehicle App projects.
Follow these steps to generate a Vehicle Model.
- Clone the vehicle-model-generator repository in a container volume.
- In this container volume, clone the vehicle-signal-specification repository and, if required, check out a particular branch:
git clone https://github.com/COVESA/vehicle_signal_specification
cd vehicle_signal_specification
git checkout <branch-name>
In case the VSS vspec doesn't contain the required signals, you can create a vspec using the VSS Rule Set.
- Execute the command
python3 gen_vehicle_model.py -I ./vehicle_signal_specification/spec ./vehicle_signal_specification/spec/VehicleSignalSpecification.vspec -l <lang> -T sdv_model -N sdv_model
Depending on the value of lang, which can be python or cpp, this creates an sdv_model directory in the root of the repository along with all generated source files for the given programming language.
Here is an overview of what is generated for every available value of lang:
- python: Python sources and a setup.py ready to be used as a Python package
- cpp: C++ sources, headers and a CMakeLists.txt ready to be used as a CMake project
To have a custom model name, refer to the README of the vehicle-model-generator repository.
- For Python: change the version of the package in setup.py manually (defaults to 0.1.0).
- Now the newly generated sdv_model can be used for distribution. (See Distributing your Vehicle Model)
Create a Vehicle Model Manually
As an alternative to generating it from a VSS specification, you can create the Vehicle Model manually. The following sections describe the required steps.
Distributing your Vehicle Model
Once you have created your Vehicle Model either manually or via the Vehicle Model Generator, you need to distribute your model to use it in an application. Follow the links below for language specific tutorials on how to distribute your freshly created Vehicle Model.
Further information
- Concept: SDK Overview
- Tutorial: Setup and Explore Development Environment
- Tutorial: Create a Vehicle App
2.4.1 - C++ Manual Vehicle Model Creation
Not yet done for C++
2.4.2 - Python Manual Vehicle Model Creation
Setup a Python Package manually
A Vehicle Model should be defined in its own Python package. This allows you to distribute the Vehicle Model later as a standalone package and to use it in different Vehicle App projects.
The name of the Vehicle Model package will be my_vehicle_model
for this walkthrough.
-
Start Visual Studio Code
-
Select File > Open Folder (File > Open… on macOS) from the main menu.
-
In the Open Folder dialog, create a
my_vehicle_model
folder and select it. Then click Select Folder (Open on macOS). -
Create a new file
setup.py
undermy_vehicle_model
:
from setuptools import setup

setup(
    name='my_vehicle_model',
    version='0.1',
    description='My Vehicle Model',
    packages=['my_vehicle_model'],
    zip_safe=False,
)
This is the Python package distribution script.
-
Create an empty folder
my_vehicle_model
undermy_vehicle_model
. -
Create a new file
__init__.py
undermy_vehicle_model/my_vehicle_model
.
At this point the source tree of the Python package should look like this:
my_vehicle_model
├── my_vehicle_model
│ └── __init__.py
└── setup.py
To verify that the package is created correctly, install it locally:
pip3 install .
The output of the above command should look like this:
Defaulting to user installation because normal site-packages is not writeable
Processing /home/user/projects/my-vehicle-model
Preparing metadata (setup.py) ... done
Building wheels for collected packages: my-vehicle-model
Building wheel for my-vehicle-model (setup.py) ... done
Created wheel for my-vehicle-model: filename=my_vehicle_model-0.1-py3-none-any.whl size=1238 sha256=a619bc9fbea21d587f9f0b1c1c1134ca07e1d9d1fdc1a451da93d918723ce2a2
Stored in directory: /home/user/.cache/pip/wheels/95/c8/a8/80545fb4ff73c974ac1716a7bff6f7f753f92022c41c2e376f
Successfully built my-vehicle-model
Installing collected packages: my-vehicle-model
Successfully installed my-vehicle-model-0.1
Finally, uninstall the package again:
pip3 uninstall my_vehicle_model
Add Vehicle Models manually
-
Install the Python Vehicle App SDK:
pip3 install git+https://github.com/eclipse-velocitas/vehicle-app-python-sdk.git
The output of the above command should end with:
Successfully installed sdv-x.y.z
Now it is time to add some Vehicle Models to the Python package. At the end of this section you will have a Vehicle Model that contains a
Cabin
model, aSeat
model and has the following tree structure:

Vehicle
└── Cabin
    └── Seat (Row, Pos)
-
Create a new file
Seat.py
undermy_vehicle_model/my_vehicle_model
:
from sdv.model import DataPointFloat, Model


class Seat(Model):
    def __init__(self, parent):
        super().__init__(parent)
        self.Position = DataPointFloat("Position", self)
This creates the Seat model with a single data point of type
float
namedPosition
. -
Create a new file
Cabin.py
undermy_vehicle_model/my_vehicle_model
:
from sdv.model import Model

from my_vehicle_model.Seat import Seat


class Cabin(Model):
    def __init__(self, parent):
        super().__init__(parent)
        self.Seat = SeatCollection("Seat", self)


class SeatCollection(Model):
    def __init__(self, name, parent):
        super().__init__(parent)
        self.name = name
        self.Row1 = self.RowType("Row1", self)
        self.Row2 = self.RowType("Row2", self)

    def Row(self, index: int):
        if index < 1 or index > 2:
            raise IndexError(f"Index {index} is out of range")
        _options = {
            1: self.Row1,
            2: self.Row2,
        }
        return _options.get(index)

    class RowType(Model):
        def __init__(self, name, parent):
            super().__init__(parent)
            self.name = name
            self.Pos1 = Seat("Pos1", self)
            self.Pos2 = Seat("Pos2", self)
            self.Pos3 = Seat("Pos3", self)

        def Pos(self, index: int):
            if index < 1 or index > 3:
                raise IndexError(f"Index {index} is out of range")
            _options = {
                1: self.Pos1,
                2: self.Pos2,
                3: self.Pos3,
            }
            return _options.get(index)
This creates the
Cabin
model, which contains a set of sixSeat
models, referenced by their names or by rows and positions:- row=1, pos=1
- row=1, pos=2
- row=1, pos=3
- row=2, pos=1
- row=2, pos=2
- row=2, pos=3
-
Create a new file
vehicle.py
undermy_vehicle_model/my_vehicle_model
:
from sdv.model import DataPointFloat, Model

from my_vehicle_model.Cabin import Cabin


class Vehicle(Model):
    """Vehicle model"""

    def __init__(self, name):
        super().__init__()
        self.name = name
        self.Speed = DataPointFloat("Speed", self)
        self.Cabin = Cabin("Cabin", self)


vehicle = Vehicle("Vehicle")
The root model of the Vehicle Model tree should be called Vehicle by convention and is specified by setting parent to None
. For all other models a parent model must be specified as the 2nd argument of the Model
constructor, as can be seen by the Cabin
and the Seat
models above.
A singleton instance of the Vehicle Model called vehicle
is created at the end of the file. This instance is supposed to be used in the Vehicle Apps. Creating multiple instances of the Vehicle Model should be avoided for performance reasons.
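With the package installed (or its sources copied next to your app code), the model can be used like any other Python package. Below is a minimal, hedged usage sketch; it assumes a running KUKSA Data Broker that the SDK can connect to.

# Hedged sketch: read the driver seat position through the manually created model.
# Assumes the my_vehicle_model package is installed and a KUKSA Data Broker is reachable.
import asyncio

from my_vehicle_model.vehicle import vehicle


async def print_driver_seat_position():
    # Walk the model tree exactly as defined above: Vehicle -> Cabin -> Seat.
    position = await vehicle.Cabin.Seat.Row(1).Pos(1).Position.get()
    print(position)


asyncio.run(print_driver_seat_position())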
Add a Vehicle Service
Vehicle Services provide service interfaces to control actuators or to trigger (complex) actions. For example, they communicate with vehicle-internal networks like CAN or Ethernet, which are connected to actuators, electronic control units (ECUs) and other vehicle computers (VCs). They may provide a simulation mode to run without a network interface. Vehicle Services may feed data to the Data Broker and may expose gRPC endpoints, which can be invoked by Vehicle Apps over a Vehicle Model.
In this section, we add a Vehicle Service to the Vehicle Model.
-
Create a new folder
proto
undermy_vehicle_model/my_vehicle_model
. -
Copy your proto file under
my_vehicle_model/my_vehicle_model/proto
As example you could use the protocol buffers message definition seats.proto provided by the KUKSA VAL services which describes a seat control service.
-
Install the grpcio tools including mypy types to generate the python classes out of the proto-file:
pip3 install grpcio-tools mypy_protobuf
-
Generate Python classes from the
SeatService
message definition:python3 -m grpc_tools.protoc -I my_vehicle_model/proto --grpc_python_out=./my_vehicle_model/proto --python_out=./my_vehicle_model/proto --mypy_out=./my_vehicle_model/proto my_vehicle_model/proto/seats.proto
This creates the following grpc files under the
proto
folder:- seats_pb2.py
- seats_pb2_grpc.py
- seats_pb2.pyi
-
Create the
SeatService
class and wrap the gRPC service:from sdv.model import Service from my_vehicle_model.proto.seats_pb2 import ( CurrentPositionRequest, MoveComponentRequest, MoveRequest, Seat, SeatComponent, SeatLocation, ) from my_vehicle_model.proto.seats_pb2_grpc import SeatsStub class SeatService(Service): "SeatService model" def __init__(self): super().__init__() self._stub = SeatsStub(self.channel) async def Move(self, seat: Seat): response = await self._stub.Move(MoveRequest(seat=seat), metadata=self.metadata) return response async def MoveComponent( self, seatLocation: SeatLocation, component: SeatComponent, position: int, ): response = await self._stub.MoveComponent( MoveComponentRequest( seat=seatLocation, component=component, # type: ignore position=position, ), metadata=self.metadata, ) return response async def CurrentPosition(self, row: int, index: int): response = await self._stub.CurrentPosition( CurrentPositionRequest(row=row, index=index), metadata=self.metadata, ) return response
Some important remarks about the wrapping SeatService class shown above:
- The SeatService class must derive from the Service class provided by the Python SDK.
- The SeatService class must use the gRPC channel from the Service base class and provide it to the _stub in the __init__ method. This allows the SDK to manage the physical connection to the gRPC service and to use the service discovery of the middleware.
- Every method needs to pass the metadata from the Service base class to the gRPC call. This is done by passing the self.metadata argument to the metadata of the gRPC call.
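To make the service reachable through the model tree (similar to the generated sdv_model, where it is called via Vehicle.Cabin.SeatService), you would typically attach an instance of it to the Cabin model. Below is a hedged sketch; the module name SeatService.py is an assumption, as the tutorial above does not prescribe a file name.

# Extending the Cabin class from Cabin.py (sketch):
from sdv.model import Model

from my_vehicle_model.SeatService import SeatService  # assumed module name


class Cabin(Model):
    def __init__(self, parent):
        super().__init__(parent)
        self.Seat = SeatCollection("Seat", self)
        self.SeatService = SeatService()  # enables e.g. Vehicle.Cabin.SeatService.MoveComponent(...)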
2.4.3 - Vehicle Model Distribution
2.4.3.1 - C++ Vehicle Model Distribution
Now that you have created your own Vehicle Model, we can distribute it to make use of it in Vehicle Apps.
Copying the folder to your Vehicle App repo
The easiest way to get started quickly is to copy the created model, presumably stored in vehicle_model
into your Vehicle App repository to use it. To do so, simply copy and paste the directory into the <sdk_root>/app
directory and replace the existing model.
Using a git submodule
A similar approach to the one above but a bit more difficult to set up is to create a git repository for the created model. The advantage of this approach is that you can share the same model between multiple Vehicle Apps without any manual effort.
- Create a new git repository, e.g. on GitHub
- Clone it locally, add the created
vehicle_model
folder to the git repository - Commit everything and push the branch
In your Vehicle App repo, add a new git submodule via
git submodule add <checkout URL of your new repo> app/vehicle_model
git submodule init
Now you are ready to develop new Vehicle Apps with your custom Vehicle Model!
2.4.3.2 - Python Vehicle Model Distribution
Now you have a Python package containing your first Python Vehicle Model, and it is time to distribute it. There is nothing special about the distribution of this package, since it is just an ordinary Python package. Check out the Python Packaging User Guide to learn more about packaging and package distribution in Python.
Distribute to single Vehicle App
If you want to distribute your Python Vehicle Model to a single Vehicle App, you can do so by copying the entire folder my_vehicle_model
under the /app/src
folder of your Vehicle App repository and treat it as a sub-package of the Vehicle App.
- Create a new folder
my_vehicle_model
under/app/src
in your Vehicle App repository. - Copy the
my_vehicle_model
folder to the/app/src
folder of your Vehicle App repository. - Import the package
my_vehicle_model
in your Vehicle App:
from <my_app>.my_vehicle_model import vehicle
...
my_app = MyVehicleApp(vehicle)
Distribute inside an organization
If you want to distribute your Python Vehicle Model inside an organization and use it to develop multiple Vehicle Apps, you can do so by creating a dedicated Git repository and copying the files there.
-
Create new Git repository called
my_vehicle_model
-
Copy the content under
my_vehicle_model
to the repository. -
Release the Vehicle Model by creating a version tag (e.g.,
v1.0.0
). -
Install the Vehicle Model package to your Vehicle App:
pip3 install git+https://github.com/<yourorg>/my_vehicle_model.git@v1.0.0
-
Import the package
my_vehicle_model
in your Vehicle App and use it as shown in the previous section.
Distribute publicly as open source
If you want to distribute your Python Vehicle Model publicly, you can do so by creating a Python package and distributing it on the Python Package Index (PyPI). PyPI is a repository of software for the Python programming language and helps you find and install software developed and shared by the Python community. If you use the pip
command, you are already using PyPI.
Detailed instructions on how to make a Python package available on PyPI can be found here.
2.5 - Run Vehicle App Runtime Services
2.5.1 - Run runtime services locally
Using tasks in Visual Studio Code
Overview: If you are developing in Visual Studio Code, the runtime components (like the KUKSA Data Broker or the Vehicle Services) are available for local execution as tasks, a feature of Visual Studio Code. Additional information on tasks can be found here.
Quick Start: Each component has a task that is defined in .vscode/tasks.json:
- Dapr (
Local - Ensure Dapr
): installs Dapr CLI and initializes Dapr if required - Mosquitto (
Local - Mosquitto
): runs Mosquitto as a container (docker run
) - KUKSA Data Broker (
Local - VehicleDataBroker
): runs KUKSA Data Broker as a container - (Optional) Vehicle Services (
Local - VehicleServices
): runs the Vehicle Services (e.g. theSeat Service
) configured in theAppManifest.json
each as a separate container - (Optional) Feeder Can (
Local - FeederCan
): runs FeederCAN as a container
Run as Bundle: To orchestrate these tasks, a task called Start Vehicle App runtime is available. This task runs the other tasks in the correct order. You can run this task by pressing F1, choosing Tasks: Run Task and then selecting Start Vehicle App runtime.
Tasks Management: Visual Studio Code offers various other commands concerning tasks like Start/Terminate/Restart/… You can access them by pressing F1 and typing task
. A list with available task commands will appear.
Logging: Running tasks appear in the Terminals View of Visual Studio Code. From there, you can see the logs of each running task.
Scripting: The tasks themselves execute scripts that are located in .vscode/scripts. These scripts download the specified version of the runtime components and execute them along with Dapr. The same mechanism can be used to register additional services or prerequisites by adding new task definitions to the tasks.json file.
Add/Change service configuration
The configuration for the services is defined in the file ./AppManifest.json
. If you want to add a new service, adapt ./AppManifest.json
. If you want to update the version, change it within the file and re-run the runtime services by restarting the tasks or the script.
Add/Change service configuration helper
{
"name": "<NAME>",
"image": "<IMAGE>",
"version": "<VERSION>"
}
Using KUKSA Data Broker CLI
A CLI tool is provided for interacting with a running instance of the KUKSA Data Broker. It can be started by running the task Local - VehicleDataBroker CLI (press F1, type Run Task, then select Local - VehicleDataBroker CLI). The KUKSA Data Broker needs to be running for you to be able to use the tool.
Using KUKSA FeederCan
FeederCAN provides a certain set of data points to the data broker.
To run FeederCAN as a task, please use [F1 -> Tasks: Run Task -> Local - FeederCan]; it will be run as a Docker container.
By default it will use the same DBC file that is used for the k3d environment: deploy/runtime/k3d/volume/dbcfileDefault.dbc. For more flexible configuration, please follow CAN feeder (KUKSA DBC Feeder).
Integrating a new runtime service into Visual Studio Code Task
Integration of a new runtime service can be done by duplicating one of the existing tasks:
- Create a new script based on the template script .vscode/scripts/run-vehicledatabroker.sh
- In .vscode/tasks.json, duplicate the section of the task run-vehicledatabroker
- Correct the names in the new code block
- Disclaimer: the Problem Matcher defined in tasks.json is a feature of Visual Studio Code tasks that ensures the process keeps running in the background
- Run the task using [F1 -> Tasks: Run Task -> <Your new task name>]
- The task should be visible in the Terminal section of Visual Studio Code
Task CodeBlock helper
{
"label": "<__CHANGEIT: Task name__>",
"type": "shell",
"command": "./.vscode/scripts/<__CHANGEIT: Script Name.sh__> --task",
"group": "none",
"presentation": {
"reveal": "always",
"panel": "dedicated"
},
"isBackground": true,
"problemMatcher": {
"pattern": [
{
"regexp": ".",
"file": 1,
"location": 2,
"message": 3
}
],
"background": {
"activeOnStart": true,
"beginsPattern": "^<__CHANGEIT: Regex log from your app, decision to send process in background__>",
"endsPattern": "."
}
}
},
Integrating a new vehicle service
Integration of a new vehicle service can be done by adding an additional case to the template script run-vehicleservices.sh.
Vehicle Service CodeBlock helper
# Configure Service Specific Requirements
configure_service() {
case $1 in
seatservice)
...
;;
<NEW_SERVICE>)
# Configure ports for docker to expose
DOCKER_PORTS="-p <PORT_TO_EXPOSE>"
# Configure ENVs needed to run the docker container
DOCKER_ENVS="-e <ENV_TO_RUN_CONTAINER>"
# Configure Dapr App Port
DAPR_APP_PORT=
# Configure Dapr Grpc Port
DAPR_GRPC_PORT=
;;
*)
echo "Unknown Service to configure."
;;
esac
}
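As a sketch, a case for a hypothetical hvacservice could be filled in as follows; the service name, ports and environment variable are placeholders and not an actual service shipped with the template:
hvacservice)
    # Expose the service's gRPC port to the host (placeholder port)
    DOCKER_PORTS="-p 50052:50052"
    # Environment variables needed to run the docker container (placeholder)
    DOCKER_ENVS="-e HVAC_LOG_LEVEL=info"
    # Port on which the Dapr sidecar reaches the service
    DAPR_APP_PORT=50052
    # gRPC port of the Dapr sidecar (placeholder)
    DAPR_GRPC_PORT=52005
    ;;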
Troubleshooting
Problem description: When integrating new services into an existing development environment, it is highly recommended to use the Visual Studio Code Task feature. A new service can easily be started by calling it from a bash script; however, restarting the same service this way might lead to port conflicts (gRPC port or app port). These conflicts can be avoided by using the Visual Studio Code Task feature.
Codespaces
If you are using Codespaces, remember that you are working on a remote agent. That's why the tasks might already be running in the background. In that case, starting the tasks again will fail since the ports are already in use. In the Dapr tab of the sidebar you can check whether tasks are already running. Alternatively, check which ports are already open: open the Ports tab to view all open ports (if it is not visible, press F1 and enter View: Toggle Ports).
Next steps
- Tutorial: Deploy runtime services in local Kubernetes cluster
- Tutorial: Setup and Explore Development Environment
- Concept: Deployment Model
- Concept: Build and release process
- Tutorial: Deploy a Python Vehicle App with Helm
2.5.2 - Run runtime services in Kubernetes
Besides running the vehicle runtime components locally, another option is to deploy them as containers in a Kubernetes control plane (like K3D). To create a K3D instance, we provide Visual Studio Code tasks. Additional information on tasks can be found here.
Quick Start: Each step has a task that is defined in .vscode/tasks.json:
- Core tasks (dependent on each other in the given order):
  - K3D - Install prerequisites: Installs the prerequisite components K3D, Helm, kubectl and Dapr without configuring them.
  - K3D - Configure control plane: Creates a local container registry used by K3D as well as a K3D cluster with Dapr enabled.
  - K3D - Deploy runtime: Deploys the runtime components (like KUKSA Data Broker, Seat Service, …) within the K3D cluster.
  - K3D - Build VehicleApp: Builds the VehicleApp and pushes it to the local K3D registry.
  - K3D - Deploy VehicleApp: Builds and deploys the VehicleApp via Helm to the K3D cluster.
Each task has its required dependencies defined. If you want to run the runtime in K3D, the task K3D - Deploy VehicleApp will create and configure everything, so it's enough to run that task.
- Optional helper tasks:
  - K3D - Deploy VehicleApp (without rebuild): Deploys the VehicleApp via Helm to the K3D cluster (without rebuilding it). This requires that the task K3D - Build VehicleApp has been executed once before.
  - K3D - Install tooling: Installs tooling for local debugging (K9s).
  - K3D - Uninstall runtime: Uninstalls the runtime components from the K3D cluster (without deleting the cluster).
  - K3D - Reset control plane: Deletes the K3D cluster and the registry with all deployed pods/services.
K3D is configured so that Mosquitto and the KUKSA Data Broker can be reached from outside the container over the ports 31883
(Mosquitto) and 30555
(KUKSA Data Broker). The test runner, which runs outside of the cluster, can interact with these services over those ports.
To check the status of your K3D instance (running pods, containers, logs, …) you can either use the kubectl
utility or start K9s (after running the task K3D - Install tooling
once) in a terminal window to have a UI for interacting with the cluster.
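If you prefer plain kubectl over K9s, a few standard commands are usually enough to get an overview; the pod name below is a placeholder:
# List all pods in all namespaces and check their status
kubectl get pods -A
# Show the logs of a specific pod (replace <pod-name> with a name from the list above)
kubectl logs <pod-name>
# Show details and events if a pod does not become ready
kubectl describe pod <pod-name>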
Run as Bundle: To orchestrate these tasks, the task K3D - Deploy VehicleApp is available. It runs the other tasks in the correct order. You can run it by pressing F1, choosing Tasks: Run Task and then selecting K3D - Deploy VehicleApp.
Tasks Management: Visual Studio Code offers various other commands concerning tasks, such as Start/Terminate/Restart. You can access them by pressing F1 and typing task. A list of the available task commands will appear.
Logging: Running tasks appear in the Terminals View of Visual Studio Code. From there, you can see the logs of each running task.
Uploading files to persistentVolume
Some applications (e.g. FeederCAN) might need to load custom files from a mounted volume. For that reason, a PersistentVolume is created in the K3D cluster.
All files located in deploy/runtime/k3d/volume are uploaded to the K3D cluster dynamically. To see how files are mounted into the directory that is accessible by the application, please refer to the deployment configuration file deploy/runtime/k3d/helm/templates/bash.yaml.
Changes in deploy/runtime/k3d/volume are automatically reflected in the PersistentVolume.
Uploading custom candump file to FeederCAN
FeederCAN requires a candump file. A pre-defined candump file is part of the Docker container release; however, if necessary, a custom file can be uploaded by:
- Creating/updating a candump file with the name candump in deploy/runtime/k3d/volume
- Recreating the feedercan pod: kubectl delete pods -l app=feedercan
More information about FeederCAN can be found here
Next steps
- Tutorial: Start runtime services locally
- Tutorial: Setup and Explore Development Environment
- Concept: Deployment Model
- Concept: Build and release process
- Tutorial: Deploy a Python Vehicle App with Helm
2.6 - Vehicle App Integration Testing
To be sure that a newly created Vehicle App will run together with the KUKSA Data Broker and potentially other dependent Vehicle Services or Vehicle Apps, it's essential to write integration tests along with developing the app.
To execute an integration test, the dependent components need to be running and accessible from the test runner. This guide describes how integration tests can be written and integrated into the CI pipeline so that they are executed automatically when the application is built.
Quickstart
- Make sure that the local execution of runtime components is working and started.
- Start the application (Debugger or run as task).
- Extend the test file /app/tests/integration/integration_test.py or create a new test file.
- Run/debug tests with the Visual Studio Code test runner.
Runtime components
To be able to test the Vehicle App in an integrated way, the following components should be running:
- Dapr
- Mosquitto
- Data Broker
- Vehicle Services
We distinguish between two environments for executing the Vehicle App and the runtime components:
- Local execution: components are running locally in the development environment
- Kubernetes execution: components (and application) are deployed and running in a Kubernetes control plane (e.g., K3D)
Local Execution
First, make sure that the runtime services are configured and running as described here.
The application itself can be executed by using a Visual Studio Code launch configuration (by pressing F5) or by executing the task VehicleApp.
When the runtime services and the application are running, integration tests can be executed locally.
Kubernetes execution (K3D)
If you want to execute the integration tests in Kubernetes mode, make sure that K3D is up and running according to the documentation. To ensure that the tests connect to the containers, please execute the following steps in a new bash terminal:
export MQTT_PORT=31883 && export VDB_PORT=30555 && pytest
Writing Test Cases
To write an integration test, you should check the sample that comes with the template (/app/tests/integration/integration_test.py). To support interacting with the MQTT broker and the KUKSA Data Broker (to get and set values for DataPoints), there are two classes in the Python SDK that will help (a minimal usage sketch follows after this list):
- MqttClient: this class provides methods for interacting with the MQTT broker. Currently, the following methods are available:
  - publish_and_wait_for_response: publishes the specified payload to the given request topic and waits (until the timeout) for a message on the response topic. The payload of the first message that arrives on the response topic is returned. If the timeout expires first, an empty string ("") is returned.
  - publish_and_wait_for_property: publishes the specified payload to the given request topic and waits (until the timeout) until the given property value is found in an incoming message on the response topic. The path describes the property location within the response message, the value the property value to look for. Example: for the response { "status": "success", "result": { "responsecode": 10 } }, if the responsecode property should be checked for the value 10, the path would be ["result", "responsecode"] and the property value would be 10. When the requested value has been found in a response message, the payload of that message is returned. If the timeout expires before a matching message is received, an empty string ("") is returned.

  This class can be initialized with a given port. If no port is specified, the environment variable MQTT_PORT is checked. If that is not set either, the default value of 1883 is used. It's recommended not to specify a port when initializing this class: locally it will then use the default port 1883, and in CI the port set by the environment variable MQTT_PORT. This prevents accidentally checking in the wrong port from local development.
IntTestHelper
: this class provides functionality to interact with the KUKSA Data Broker.register_dapoint
: registers a new datapoint with given name and typeset_..._datapoint
: set the given value for the datapoint with the given name (with given type). If the datapoint does not exist, it will be registered.
This class can be initialized with a given port. If no port is specified, the environment variable
VDB_PORT
will be checked. If this is not possible either, the default value of55555
will be used. It’s recommended to specify no port when initializing that class as it will locally use the default port55555
and in CI the port set by the environment variableVDB_PORT
which is set. This will prevent a check-in in the wrong port from local development.
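The following is a minimal sketch of such a test, assuming import paths like those used in the template's sample test and illustrative topic names, payloads and datapoint paths; method signatures (sync vs. async) may differ between SDK versions, so align it with /app/tests/integration/integration_test.py:
import json

import pytest

# Import paths are assumptions based on the template's sample test; verify them there.
from sdv.test.inttesthelper import IntTestHelper
from sdv.test.mqtt_util import MqttClient


@pytest.mark.asyncio
async def test_set_position_request():
    # No ports given: locally the defaults (1883/55555) are used,
    # in CI the MQTT_PORT/VDB_PORT environment variables take over.
    mqtt_client = MqttClient()
    helper = IntTestHelper()

    # Bring the (illustrative) datapoint into a known state; pick the
    # set_..._datapoint variant that matches your datapoint's type.
    await helper.set_uint16_datapoint("Vehicle.Cabin.Seat.Row1.Pos1.Position", 0)

    # Send a request to the app and wait for its answer on the response topic
    # (topic names and payload are placeholders for your app's MQTT API).
    payload = json.dumps({"position": 300, "requestId": "42"})
    response = mqtt_client.publish_and_wait_for_response(
        request_topic="sampleapp/setPosition/request",
        response_topic="sampleapp/setPosition/response",
        payload=payload,
    )

    # An empty string means no response arrived before the timeout.
    assert response != ""
    assert json.loads(response)["status"] == "success"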
Please make sure that you don't check in test classes using local ports, because then the execution in the CI workflow will fail (the CI workflow uses Kubernetes execution for running integration tests).
Running Tests locally
Once tests are developed, they can be executed against the running runtime components, either against the local runtime or in Kubernetes mode, using the test runner in Visual Studio Code. Whether the tests run against the local or the Kubernetes components is determined by the port. The local ports for Mosquitto and the KUKSA Data Broker are 1883
/55555
. In Kubernetes mode, the ports would be the locally exposed ports 31883
/30555
. If using the Kubernetes ports, the tests will be executed against the runtime components/application that run in containers within the Kubernetes cluster.
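For example, to run the tests explicitly against the local runtime you could set the local ports before invoking pytest, mirroring the Kubernetes variant shown earlier:
export MQTT_PORT=1883 && export VDB_PORT=55555 && pytest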
Running Tests in CI pipeline
The tests will be discovered and executed automatically in the CI pipeline. The job Run Integration Tests
contains all steps to set up and execute tests in Kubernetes mode. The results are published as test results to the workflow.
Common Tasks
Run test in local mode
- Make sure that the tasks for the runtime components are running (by checking the terminal view).
- Make sure that your application is running (via Debugger or task).
- Make sure that you are using the right ports for local execution of runtime components.
- Run tests from the test runner.
Run tests in Kubernetes mode
- Make sure that K3D is set up and all vehicle services and vehicle runtime are deployed and running (by executing the task K3D - Deploy runtime).
- Make sure that the tests are using the right ports for Kubernetes execution (see above).
- Run tests from the test runner.
Update application when running in Kubernetes mode
- Re-run the task K3D - Deploy VehicleApp, which rebuilds and deploys the application to K3D.
- Re-run tests from the test runner.
Troubleshooting
Check if the services are registered correctly in Dapr
- When running in local mode, call dapr dashboard in a terminal and open the given URL to see the Dapr dashboard in the browser.
- When running in Kubernetes mode, call dapr dashboard -k in a terminal and open the given URL to see the Dapr dashboard in the browser.
Troubleshoot IntTestHelper
- Make sure that the KUKSA Data Broker is up and running by checking the task log.
- Make sure that you are using the right ports for local/Kubernetes execution.
- Make sure that you installed the correct version of the SDK (SDV-package).
Troubleshoot Mosquitto (MQTT Broker)
- Make sure that Mosquitto is up and running by checking the task log.
- Make sure that you are using the right ports for local/Kubernetes execution.
- Use the VsMqtt extension to connect to the MQTT broker (localhost:1883 (local) or localhost:31883 (Kubernetes)) to monitor topics in the MQTT broker.
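Alternatively, if you have the Mosquitto client tools installed, you can watch all topics from a terminal with mosquitto_sub; shown here for the local port, swap in 31883 for Kubernetes:
# Subscribe to all topics on the local broker and print topic names alongside payloads
mosquitto_sub -h localhost -p 1883 -t '#' -v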
Next steps
- Concept: Deployment Model
- Concept: Build and release process
- Tutorial: Deploy a Python Vehicle App with Helm
2.7 - Vehicle App Deployment via PodSpecs
This tutorial will show you how to:
- Prepare PodSpecs
- Deploy your Vehicle App to local K3D
Prerequisites
- Visual Studio Code with the Python extension installed. For information on how to install extensions on Visual Studio Code, see VS Code Extension Marketplace.
- Completed the tutorial How to create a vehicle app
Use the sample PodSpecs
If the Vehicle App has been created from one of our template repositories, a sample PodSpec is already available under deploy/VehicleApp/podspec
and can be used as it is without any modification. Another example can also be found in the documentation of Leda.
Content
The sample PodSpec starts with some general information about the app and the Dapr configuration. Here you can define, e.g., the app port and the log level. You can also add more labels to your app, which might help to identify it later.
apiVersion: v1
kind: Pod
metadata:
name: sampleapp
annotations:
dapr.io/enabled: "true"
dapr.io/app-id: sampleapp
dapr.io/app-port: "50008"
dapr.io/app-protocol: grpc
dapr.io/log-level: info
labels:
app: sampleapp
Afterwards, the configuration of the container is specified. Please be aware that the container port should match the app port from the Dapr configuration above. In the example, the app-id of the VehicleDataBroker is also specified, since the app needs to connect to it. Finally, the image to be used for the deployment is defined. In this example the local registry is used, which is created during the configuration of the control plane (see here for details).
spec:
containers:
- name: sampleapp
imagePullPolicy: IfNotPresent
ports:
- containerPort: 50008
env:
- name: VEHICLEDATABROKER_DAPR_APP_ID
value: "vehicledatabroker"
image: k3d-registry.localhost:12345/sampleapp:local
Note
Please make sure that you have already pushed your image to the local registry before trying to deploy it. If you used the provided tasks (see here for details), you can use the following commands:
docker tag localhost:12345/sampleapp:local k3d-registry.localhost:12345/sampleapp:local
docker push k3d-registry.localhost:12345/sampleapp:local
Local registry or remote registry
In the example above we used the local registry, but you can also define a remote registry in the image tag, e.g.
image: ghcr.io/eclipse-velocitas/vehicle-app-python-template/sampleapp:0.0.1-bcx
If your registry is not public, you might need to add secrets to your Kubernetes config; see the official documentation for details. Afterwards, you also have to add the secrets to the PodSpec:
imagePullSecrets:
- name: regcred
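As a sketch, such a pull secret (named regcred here, matching the snippet above) can be created with kubectl; the registry server and credentials are placeholders:
kubectl create secret docker-registry regcred \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password>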
Deploy your Vehicle App to local K3D
Prerequisites
- A local K3D installation must be available. For how to setup K3D, check out this tutorial.
Deploying your app with PodSpecs can be done with one simple command:
kubectl apply -f <podspec.yaml>
In parallel, you can check with K9s whether the deployment is working correctly.
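If you prefer plain kubectl, the deployment can be checked like this; the pod and container name sampleapp match the metadata of the sample PodSpec above:
# Check that the pod reached the Running state
kubectl get pod sampleapp
# Inspect the application logs (-c selects the app container, since a Dapr sidecar runs alongside it)
kubectl logs sampleapp -c sampleapp
# Show events in case the pod does not start
kubectl describe pod sampleapp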
Next steps
- Tutorial: Start runtime services locally
- Concept: Build and release process
2.8 - Vehicle App Deployment with Helm
This tutorial will show you how to:
- Prepare a Helm chart
- Deploy your Vehicle App to local K3D
Prerequisites
- Visual Studio Code with the Python extension installed. For information on how to install extensions on Visual Studio Code, see VS Code Extension Marketplace.
- Completed the tutorial How to create a vehicle app
Use the sample Helm chart
If the Vehicle App has been created from one of our template repositories, a sample Helm chart is already available under deploy/VehicleApp/helm
and can be used as it is without any modification.
This sample chart uses the values from the deploy/VehicleApp/helm/values.yaml file. During the deployment of the VehicleApp, the necessary app attributes from the AppManifest.json (e.g. app name and app port) overwrite the default values from the sample Helm chart via the .vscode/runtime/k3d/deploy_vehicleapp.sh script.
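If you want to mimic what the script does, you could also deploy the sample chart manually with Helm and override individual values on the command line; the release name and the values below are illustrative:
helm install vehicleapp ./deploy/VehicleApp/helm \
  --set imageVehicleApp.name=sampleapp \
  --set imageVehicleApp.daprPort=50008 \
  --wait --timeout 60s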
Prepare a new Helm chart
If you would like to write a new Helm chart, this section will guide you through adapting and deploying a new vehicle app, which is called my_vehicle_app for this walkthrough.
- Start Visual Studio Code and open the previously created Vehicle App repository.
- Create a new folder my_vehicle_app under deploy.
- Copy all files from the deploy/VehicleApp folder to the new folder deploy/my_vehicle_app.
- Rename the file deploy/my_vehicle_app/helm/templates/vehicleapp.yaml to deploy/my_vehicle_app/helm/templates/my_vehicle_app.yaml.
- Open deploy/my_vehicle_app/helm/Chart.yaml, change the name of the chart to my_vehicle_app and provide a meaningful description.

apiVersion: v2
name: my_vehicle_app
description: Short description for my_vehicle_app

# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and cannot be deployed as a result.
type: application

# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: 1.16.0
- Open deploy/my_vehicle_app/helm/values.yaml and change name, repository and daprAppid to my_vehicle_app. Rename the root node from imageVehicleApp to imageMyVehicleApp.

imageMyVehicleApp:
  name: my_vehicle_app
  repository: local/my_vehicle_app
  pullPolicy: Always
  # Overrides the image tag whose default is the chart appVersion.
  tag: "#SuccessfulExecutionOfReleaseWorkflowUpdatesThisValueToReleaseVersionWithoutV#"
  daprAppid: my_vehicle_app
  daprPort: 50008

nameOverride: ""
fullnameOverride: ""
- Open deploy/my_vehicle_app/helm/templates/my_vehicle_app.yaml and replace imageVehicleApp with imageMyVehicleApp:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{.Values.imageMyVehicleApp.name}}
  labels:
    app: {{.Values.imageMyVehicleApp.name}}
spec:
  selector:
    matchLabels:
      app: {{.Values.imageMyVehicleApp.name}}
  template:
    metadata:
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "{{.Values.imageMyVehicleApp.daprAppid}}"
        dapr.io/app-port: "{{.Values.imageMyVehicleApp.daprPort}}"
        dapr.io/log-level: "debug"
        dapr.io/config: "config"
        dapr.io/app-protocol: "grpc"
      labels:
        app: {{.Values.imageMyVehicleApp.name}}
        {{- include "helm.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{.Values.imageMyVehicleApp.name}}
          image: "{{ .Values.imageMyVehicleApp.repository }}:{{ .Values.imageMyVehicleApp.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.imageMyVehicleApp.pullPolicy }}
- Update or copy the scripts build_vehicleapp.sh and deploy_vehicleapp.sh in the path .vscode/scripts/runtime/k3d/ for the local Kubernetes deployment and adjust the values according to the values in AppManifest.json (see the sketch after this list):
  - APP_NAME
  - APP_PORT
  - DOCKERFILE_FILE
- Update the script .github/scripts/deploy_imagefromghcr.sh for the CI workflow with the correct values from the AppManifest.json as above.
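A minimal sketch of the values to adjust in the copied scripts, assuming the hypothetical my_vehicle_app listens on port 50008 and keeps the default Dockerfile location:
# Excerpt from .vscode/scripts/runtime/k3d/build_vehicleapp.sh / deploy_vehicleapp.sh (illustrative values)
APP_NAME="my_vehicle_app"
APP_PORT=50008
DOCKERFILE_FILE="./app/Dockerfile"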
At this point, the Helm chart and the updated scripts are ready to use and the folder structure under deploy/my_vehicle_app should look like this:
deploy
├── my_vehicle_app
│   └── helm
│       ├── templates
│       │   ├── _helpers.tpl
│       │   └── my_vehicle_app.yaml
│       ├── .helmignore
│       ├── Chart.yaml
│       └── values.yaml
Deploy your Vehicle App to local K3D
Prerequisites
- A local K3D installation must be available. For how to setup K3D, check out this tutorial.
After the Helm chart has been prepared, you can deploy it to local K3D. Execute the script:
deploy/my_vehicle_app/deploy-my-vehicle-app.sh
This script builds the local source code of the application into a container, pushes that container to the local cluster registry and deploys the app via a Helm chart to the K3D cluster. Re-run this script after you have changed the source code of your application to re-deploy it with the latest changes.
Next steps
- Tutorial: Start runtime services locally
- Concept: Build and release process
3 - Contribution Guidelines
Thanks for thinking about contributing to Eclipse Velocitas. We really appreciate the time and effort you want to spend helping to improve Eclipse Velocitas.
However, in order to get you started as fast as possible, we need to go through some organizational issues first.
Eclipse Contributor Agreement
Before your contribution can be accepted by the project team, contributors must electronically sign the Eclipse Contributor Agreement (ECA).
Commits that are provided by non-committers must have a Signed-off-by field in the footer indicating that the author is aware of the terms by which the contribution has been provided to the project. The non-committer must additionally have an Eclipse Foundation account and must have a signed Eclipse Contributor Agreement (ECA) on file.
For more information, please see the Eclipse Committer Handbook: https://www.eclipse.org/projects/handbook/#resources-commit
Making Your Changes
- Fork the repository on GitHub.
- Create a new branch for your changes.
- Make your changes following the code style guide (see Code Style Guide section above).
- When you create new files, make sure you include a proper license header at the top of the file (see License Header section below).
- Make sure you include test cases for non-trivial features.
- Make sure test cases provide sufficient code coverage (see GitHub actions for minimal accepted coverage).
- Make sure the test suite passes after your changes.
- Commit your changes into that branch.
- Use descriptive and meaningful commit messages. Start the first line of the commit message with the issue number and title, e.g., [#9865] Add token-based authentication.
- Squash multiple commits that are related to each other semantically into a single one.
- Make sure you use the -s flag when committing as explained above.
- Push your changes to your branch in your forked repository.
Adding Documentation to Hugo
- Add the markdown document to the appropriate folder in the path velocitas-docs/hugo/hugo/content.
- Add the front-matter
---
title: "title of the file"
date: 2022-05-09T13:43:25+05:30
---
- Additional front matter that can be added (see the example after this list):
- url : "specifying a definite url to the file"
- weight : 10 (used for ordering your content in lists. Lower weight gets higher precedence.)
- The images need to be put in the path velocitas-docs/hugo/hugo/static/assests. The image reference should be /assests/image.jpg in the markdown file. (Note: do not use relative paths or URLs.)
- In case you are creating a new folder, create an _index.md file containing only the front matter.
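For example, a front matter using these additional fields might look like this; the URL and weight are illustrative:
---
title: "title of the file"
date: 2022-05-09T13:43:25+05:30
url: "/docs/title-of-the-file/"
weight: 10
---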
Running Locally
- Install Hugo version 0.98.0 extended: Release v0.98.0 · gohugoio/hugo (github.com)
- Install the Docsy theme in the path velocitas-docs/hugo/hugo/themes:
# Run this command from the root directory of velocitas-docs
git clone https://github.com/google/docsy.git hugo/hugo/themes/docsy
- Install pre-requisites
cd themes/docsy/userguide/
npm install
npm install --save-dev postcss
- Run the command hugo server from the velocitas-docs/hugo/hugo directory and visit localhost:1313 to see the rendered static site.
Submitting the Changes
Submit a pull request via the normal GitHub UI.
After Submitting
- Do not use your branch for any other development, otherwise further changes that you make will be visible in the PR.
License Header
Please make sure any file you newly create contains a proper license header like this:
# Copyright (c) <year> Contributors to the Eclipse Foundation
#
# See the NOTICE file(s) distributed with this work for additional
# information regarding copyright ownership.
#
# This program and the accompanying materials are made available under the
# terms of the Apache License 2.0 which is available at
# http://www.apache.org/licenses/LICENSE-2.0
#
# SPDX-License-Identifier: Apache-2.0
You should, of course, adapt this header to use the specific mechanism for comments pertaining to the type of file you create.
Important
Please do not forget to add your name/organization to the /legal/legal/NOTICE.md
file’s Copyright Holders section. If this is not the first contribution you make, then simply update the time period contained in the copyright entry to use the year of your first contribution as the lower boundary and the current year as the upper boundary, e.g.,
Copyright 2017, 2018 ACME Corporation
Build
- A pipeline run is triggered on every PR merge. This run triggers the Hugo docs build.
- Hugo v0.98.0 extended is set up for the runner.
- The Docsy theme is set up for styling the static site.
- Then the dependencies for the theme are installed.
- The static site is generated and stored in a folder "public".
- The contents of public are committed to the gh_pages branch, which is exposed to host the GitHub Pages.