Understanding Dapr: Building a Pub/Sub System with Quarkus and Tracing
A Step-by-Step Guide to Implementing Distributed Messaging and Observability in Microservices Using Dapr and Quarkus with Kubernetes
👋 Introduction
I recently heard about Dapr on a podcast and wanted to learn more. I was especially interested in how it can shift focus from technical details to business capabilities.
I have no prior experience with the framework and little knowledge of service meshes (though Dapr is not a service mesh, as we'll see later). While there are many deployment options, I mainly work with AWS ECS rather than Kubernetes, so I wanted to try a local Kubernetes setup for this proof of concept. Similarly, I usually work with Spring but wanted to explore Quarkus for this project.
Let's see how we can combine these technologies!
Why Dapr and Kubernetes?
Besides my interest in learning new technologies, Dapr and Kubernetes can help in distributed systems by standardizing communication. Beyond personal interest, these were my motivations for the evaluation in this article:
Reducing boilerplate code: Microservices often require a lot of infrastructure code that gets copied and pasted between services. Other shared concerns like rate limiting also have potential for standardization, since they rarely differ dramatically between services beyond some config tweaks.
Standardized communication: Teams might implement Pub/Sub differently, for example with AWS SNS/SQS or Kafka. State management might also be implemented differently, which makes maintainability and extensibility harder. Standardization can additionally lower licensing costs, simplify onboarding, and improve knowledge sharing across teams.
Technology agnostic: Being able to change the underlying infrastructure matters, and Dapr allows such switches without affecting your application code. For instance, you could move from one hyperscaler to another.
Local development experience: I like to test my systems locally, and Dapr allows a full system to run locally with minimal setup effort. We will see this in action later.
Domain focus: Developers can focus on the business side of the code, which aligns well with domain-driven design (DDD).
Brief Introduction to Dapr and Kubernetes
Dapr (Distributed Application Runtime) is an open-source runtime that simplifies the creation of distributed applications. Introduced in 2019, Dapr offers standardized building blocks that are available over HTTP or gRPC APIs. These include Pub/Sub, observability, state management, and more. They are exposed through a sidecar: the Dapr runtime runs in parallel to your application and handles infrastructure concerns for it. Dapr has some overlapping functionality with a service mesh if we only look at network concerns, but Dapr also offers dev tools for creating microservices. So the choice between Dapr and a service mesh is not either-or and depends on your requirements. You can use both together if you want, but generally, Dapr is better for pub/sub and state management, while a service mesh might be more appropriate for traffic encryption or A/B testing with traffic splitting.
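To make the "standardized APIs" idea concrete: once a sidecar runs next to your application, publishing an event is a plain HTTP call against it, no matter which broker sits behind the component. A minimal sketch, assuming the sidecar's default HTTP port 3500 and a pub/sub component named pubsub:
# Publish an event through the local Dapr sidecar (default HTTP port 3500).
# The broker behind the "pubsub" component (Redis, Kafka, SNS/SQS, ...) does
# not change the shape of this call.
curl -X POST http://localhost:3500/v1.0/publish/pubsub/messages \
  -H "Content-Type: application/json" \
  -d '{"content": "hello from the sidecar API"}'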
Kubernetes is an open-source technology to orchestrate containers, and you might already have heard of it if you are in the cloud and DevOps world. It helps with supplying, scaling, and managing containerized workloads and offers functionality around service discovery, load balancing, and scaling. It was originally developed by Google and published in 2014, and nowadays acts as the de facto standard for container orchestration. For our purposes, Kubernetes provides the perfect environment to test Dapr's capabilities in a local and controllable environment that resembles a production environment.
The following sections will guide you through building a distributed pub/sub system with Dapr. Containers are managed with Kubernetes, and the application services run on Quarkus.
🏗️ Project Structure and Decisions
.
├── components/
│   ├── config.yaml          # Dapr observability configuration
│   └── pubsub.yaml          # Redis pub/sub component
├── k8s/
│   ├── publisher.yaml       # Publisher service deployment
│   └── subscriber.yaml      # Subscriber service deployment
├── publisher-service/       # Publisher Quarkus application
├── subscriber-service/      # Subscriber Quarkus application
└── gradle files...
Why This Structure?
Separation of concerns: The project is organized to separate business microservices from the infrastructure configuration that deploys them.
Isolation of Dapr components: Dapr components live in their own directory because they depend on the platform, not on the application. This simplifies maintenance and allows replacing components without modifying the microservices directly.
Kubernetes Config: The k8s/ directory contains the Kubernetes deployment configuration. We separate these from both the application code and the Dapr components to enable independent lifecycle management. This approach allows us to update deployment settings (like scaling or resource limits) without touching application code or Dapr components. An alternative would be storing configs alongside each service, but this centralized approach makes it easier to see the entire system's deployment at once.
Setup and Deployment Decisions
Prerequisites
If you want to follow along, make sure you have the following set up on your machine. Judging by the commands used throughout this article, you need Docker, a local Kubernetes cluster (such as the one bundled with Docker Desktop), kubectl, Helm, the Dapr CLI, and a JDK (the project uses the Gradle wrapper).
You can find all of the source code in this GitHub Repository.
Clean Slate Approach
The setup of our PoC begins with a thorough cleanup that prunes existing deployments and Dapr components. This ensures reproducibility and prevents conflicts with existing or partially cleaned-up configurations.
# Delete any existing deployments
kubectl delete -f k8s/ 2>/dev/null || true
kubectl delete -f components/ 2>/dev/null || true
# Remove DAPR and its components
dapr uninstall -k --all
# Remove any leftover Redis and Zipkin resources
kubectl delete deployment,service,secret -l app=dapr-dev-zipkin 2>/dev/null || true
kubectl delete deployment,service,secret -l app=dapr-dev-redis 2>/dev/null || true
kubectl delete statefulset,service,secret -l app=dapr-dev-redis 2>/dev/null || true
kubectl delete pvc -l app=dapr-dev-redis 2>/dev/null || true
# Delete all resources in default namespace with specific labels
kubectl delete all,secrets,configmaps -l app=publisher -n default 2>/dev/null || true
kubectl delete all,secrets,configmaps -l app=subscriber -n default 2>/dev/null || true
kubectl delete all,secrets,configmaps -l app=dapr-dev-redis -n default 2>/dev/null || true
kubectl delete all,secrets,configmaps -l app=dapr-dev-zipkin -n default 2>/dev/null || true
# Check for any remaining Helm releases and remove if found
helm ls -A | grep -E 'dapr|redis|zipkin' | awk '{print $1}' | xargs -r helm uninstall
Dapr Installation with Development Components
We initialize our Dapr environment like this:
dapr init -k --dev
With the --dev flag, we automatically set up Redis for state management and pub/sub, and Zipkin for tracing, without needing to configure them manually. For production, we should not use this flag, as the Redis instance is not configured for high availability (Redis can be configured for persistence, but in this dev setup it's primarily used as an in-memory store). However, this setup is ideal for test and development environments. With Zipkin, we can trace the messages we send, see when they are consumed, and see how much time is spent in the process.
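Before moving on, it's worth checking that the control plane and the dev components actually came up; both commands below are part of the standard tooling:
# Show the health of the Dapr control-plane services in the cluster
dapr status -k
# The --dev Redis and Zipkin instances land in the default namespace
kubectl get pods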
Quarkus Services
In the past, I have mostly written about Spring, and my professional experience with Quarkus is rather limited in comparison. Still, I like the framework and wanted to use it for this use case and step out of my comfort zone.
Nowadays, the performance differences between the two frameworks are getting smaller (see this comparison: https://maddevs.io/blog/spring-boot-vs-quarkus/). Quarkus performs well in terms of heap memory usage in cloud environments and is described as "Kube-native," meaning it's specifically designed to work optimally in Kubernetes environments, with features like fast startup, low memory footprint, and native compilation options.
Publisher
We use Quarkus in its non-reactive form for simplicity, which is why we have to call .block() here for the publish to complete. In a production environment, for better performance, we could use Quarkus's reactive Mutiny API together with the non-blocking Dapr client to stay reactive end to end (see the sketch after the code).
@POST
public Response publishMessage(Message message) {
    LOG.info("Attempting to publish message: " + message.getContent());
    try {
        // Set timestamp if not already set
        if (message.getTimestamp() == 0) {
            message.setTimestamp(System.currentTimeMillis());
        }
        client.publishEvent(pubsubName, topic, message).block();
        LOG.info("Successfully published message to " + topic);
        return Response.ok().build();
    } catch (Exception e) {
        LOG.error("Failed to publish message", e);
        return Response.serverError().entity(e.getMessage()).build();
    }
}
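For reference, a reactive variant could look roughly like the following sketch. It assumes the same injected client, pubsubName, and topic fields as above, plus io.smallrye.mutiny.Uni on the classpath, and bridges the Dapr client's Mono into a Mutiny Uni via a CompletionStage instead of blocking:
// Sketch only: a non-blocking publish, adapting Mono -> CompletionStage -> Uni.
// Requires io.smallrye.mutiny.Uni and a reactive Quarkus REST stack.
@POST
public Uni<Response> publishMessageReactive(Message message) {
    return Uni.createFrom()
            .completionStage(() -> client.publishEvent(pubsubName, topic, message).toFuture())
            .onItem().transform(done -> Response.ok().build())
            .onFailure().recoverWithItem(e -> Response.serverError().entity(e.getMessage()).build());
}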
Subscriber
@GET
@Path("/dapr/subscribe")
@Produces(MediaType.APPLICATION_JSON)
public Response getSubscriptions() {
    var subscription = Map.of(
            "pubsubName", pubsubName,
            "topic", topic,
            "route", "/messages"
    );
    LOG.info("Returning subscription configuration: " + subscription);
    return Response.ok(Collections.singletonList(subscription)).build();
}
Dapr doesn't automatically discover message endpoints without this subscription declaration. The /dapr/subscribe endpoint explicitly tells Dapr which pub/sub component, topic, and route to use, serving as a registration mechanism that lets the runtime know where to deliver messages when they arrive.
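As an alternative to this programmatic registration, Dapr also supports declarative subscriptions as Kubernetes resources, which keep the routing out of the application code entirely. A sketch of the equivalent declarative variant (the resource name here is made up):
apiVersion: dapr.io/v1alpha1
kind: Subscription
metadata:
  name: messages-subscription   # hypothetical name
spec:
  pubsubname: pubsub
  topic: messages
  route: /messages
Either way, the handler below is what receives the messages Dapr delivers to the /messages route.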
@Path("/messages")
@POST
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
@Topic(name = "messages", pubsubName = "pubsub")
public Response receiveMessage(CloudEvent<Message> cloudEvent) {
try {
LOG.info("=== RECEIVED CLOUD EVENT === " + cloudEvent);
if (cloudEvent == null) {
LOG.warn("Received null cloud event");
return Response.status(Response.Status.BAD_REQUEST).build();
}
Message message = cloudEvent.getData();
if (message != null) {
LOG.infof("=== RECEIVED MESSAGE CONTENT: %s at timestamp: %d ===",
message.getContent(),
message.getTimestamp());
return Response.ok().build();
} else {
LOG.warn("Received null message data in cloud event");
return Response.status(Response.Status.BAD_REQUEST).build();
}
} catch (Exception e) {
LOG.error("Error processing received message", e);
return Response.serverError().build();
}
}
Our consumer gets the data and logs it. There is not much more going on here, but for our showcase, that is enough. This information will also be available later in Zipkin for tracing purposes.
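For completeness, the Message payload shared by both services is a plain POJO. Inferred from the getters and the JSON used in this article, it looks roughly like this:
// Sketch of the shared payload class, inferred from its usage in both services.
public class Message {
    private String content;   // set from the JSON body, e.g. {"content":"Hello DAPR!"}
    private long timestamp;   // filled in by the publisher if the client leaves it at 0

    public String getContent() { return content; }
    public void setContent(String content) { this.content = content; }
    public long getTimestamp() { return timestamp; }
    public void setTimestamp(long timestamp) { this.timestamp = timestamp; }
}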
Since we have a Gradle multi-project structure, we can just go to our root directory and build everything with:
./gradlew clean build
After this build command completes successfully, we can continue with our deployment steps.
cd publisher-service
docker build -t publisher:latest .
cd -
cd subscriber-service
docker build -t subscriber:latest .
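A note if you are not on Docker Desktop: with imagePullPolicy: Never (see the deployment section below), the images must already exist inside the cluster's container runtime. On minikube, for example, you would load them explicitly:
# Only needed on clusters that don't share the host's Docker daemon (e.g. minikube)
minikube image load publisher:latest
minikube image load subscriber:latest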
Dapr Components Configuration
We have two files: one configures pub/sub, the other observability.
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: pubsub
  namespace: default
spec:
  type: pubsub.redis
  version: v1
  metadata:
    - name: redisHost
      value: dapr-dev-redis-master:6379
    - name: redisPassword
      secretKeyRef:
        name: dapr-dev-redis
        key: redis-password
    - name: enableTLS
      value: "false"
    - name: maxRetries
      value: "5"
    - name: maxRetryBackoff
      value: "10s"
    - name: redeliverInterval
      value: "30s"
    - name: processingTimeout
      value: "60s"
Redis was chosen here for its simplicity in our development setup and its fit for pub/sub operations. We use a Kubernetes secret for the password and fairly standard retry and backoff settings. For development, we have disabled TLS for simplicity.
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: dapr.config
  namespace: default
spec:
  tracing:
    samplingRate: "1"
    zipkin:
      endpointAddress: "http://dapr-dev-zipkin.default.svc.cluster.local:9411/api/v2/spans"
  metric:
    enabled: true
The sampling rate of 1 captures all our traces; you would use a lower value in a production setup. Zipkin provides a simple UI that we can use for trace visualization and analysis, and by setting metric.enabled to true we also collect metrics for performance monitoring.
Finally, we apply the components to our local Kubernetes cluster, deploying the Dapr configurations as Kubernetes resources.
kubectl apply -f components/
For an overview of the DAPR resources on Kubernetes, check this.
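You can also confirm that everything was registered, since Dapr components and configurations are ordinary custom resources:
# Dapr registers its resources as CRDs, so kubectl can list them directly
kubectl get components.dapr.io
kubectl get configurations.dapr.io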
Kubernetes Deployment Configuration
Sidecar injection with dapr.io/enabled: "true" gets Dapr working alongside our app. The app-id is crucial for service discovery and component access. App-port tells Dapr where our app listens. We use debug logging during development to see what's happening. The config reference links to our tracing setup for observability across services.
annotations:
  dapr.io/enabled: "true"
  dapr.io/app-id: "publisher"
  dapr.io/app-port: "8080"
  dapr.io/app-protocol: "http"
  dapr.io/log-level: "debug"
  dapr.io/config: "dapr.config"
We have also set imagePullPolicy: Never because we only use local images in this setup. If you deploy to a cluster that pulls images from a registry, this needs to be changed, for example to Always. The settings mentioned here and in the annotations block above apply to both the publisher and subscriber.
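To see where these settings live, here is a condensed sketch of what a manifest like k8s/publisher.yaml can look like (replica count, labels, and the accompanying Service are assumptions, not the exact file from the repository):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: publisher
spec:
  replicas: 1
  selector:
    matchLabels:
      app: publisher
  template:
    metadata:
      labels:
        app: publisher
      annotations:
        dapr.io/enabled: "true"
        dapr.io/app-id: "publisher"
        dapr.io/app-port: "8080"
        dapr.io/app-protocol: "http"
        dapr.io/log-level: "debug"
        dapr.io/config: "dapr.config"
    spec:
      containers:
        - name: publisher
          image: publisher:latest
          imagePullPolicy: Never   # local images only; change this for real clusters
          ports:
            - containerPort: 8080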
By running the following command, we deploy our publisher and subscriber services.
kubectl apply -f k8s/
🕵🏻 Testing our setup
Let's verify our deployment is up and running. After deploying, check the pods with kubectl get pods. You should see something like:
NAME                      READY   STATUS    RESTARTS   AGE
publisher-xxx-xxx         2/2     Running   0          1m
subscriber-xxx-xxx        2/2     Running   0          1m
dapr-dev-redis-master-0   1/1     Running   0          5m
dapr-dev-zipkin-xxx-xxx   1/1     Running   0          5m
The 2/2 READY count confirms both app and Dapr sidecar containers are running properly.
To test the pub/sub system, first port-forward the publisher with kubectl port-forward svc/publisher 8080:8080. Then send a test message using curl:
curl -v -X POST http://localhost:8080/api/messages \
  -H "Content-Type: application/json" \
  -d '{"content":"Hello DAPR!"}'
Check the subscriber logs with kubectl logs -l app=subscriber -c subscriber to verify message delivery. You can also check the Dapr sidecar logs if needed.
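Based on the log statements in receiveMessage above, a successful delivery should show up in the subscriber logs roughly like this (values are illustrative):
=== RECEIVED CLOUD EVENT === ...
=== RECEIVED MESSAGE CONTENT: Hello DAPR! at timestamp: 1718000000000 ===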
For observability, port-forward Zipkin with kubectl port-forward svc/dapr-dev-zipkin 9411:9411 and open http://localhost:9411. You'll see the publisher and subscriber services, operations like /api/messages and /messages, and trace data showing the message flow, as in the image below.
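If you prefer the command line over the UI, the same data is reachable through Zipkin's HTTP API while the port-forward is active:
# Fetch recent traces for the publisher service via Zipkin's v2 API
curl "http://localhost:9411/api/v2/traces?serviceName=publisher&limit=5"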
🏁 TL;DR - Dapr Evaluation
Dapr reduces boilerplate code in microservices by standardizing infrastructure concerns
Its technology-agnostic approach enables infrastructure switches without code changes
A local development setup with Kubernetes and Dapr provides a production-like environment
The sidecar pattern injects companion containers handling communication, pub/sub, and state management
Quarkus services benefit from Dapr's HTTP/gRPC APIs with minimal integration code
Redis was chosen for pub/sub in development due to its simplicity
Zipkin tracing provides observability across service boundaries
For enterprise teams, the benefits can outweigh the configuration overhead
For personal projects, the setup complexity feels like too much