Spring Framework 4.3: AsyncRestTemplate interceptors

After a short break I would like to come back with a very interesting topic. It is not often that you get the chance to describe one of the upcoming features of a widely used library like Spring. Last year I co-authored a really simple feature that adds to Spring's AsyncRestTemplate a much needed extension point: interceptors. So I would like to take the liberty of describing them in more depth.

This topic might not be useful for the majority of AsyncRestTemplate use cases, unless you are developing frameworks or libraries yourself and are looking for seamless integration. The contract of the interceptor follows its RestTemplate counterpart as closely as possible.

public interface AsyncClientHttpRequestInterceptor {

    ListenableFuture<ClientHttpResponse> intercept(HttpRequest request, byte[] body,
            AsyncClientHttpRequestExecution execution) throws IOException;
}

The major difference is that instead of returning the response object, the interceptor has to work on a ListenableFuture – an observable promise that will eventually return an HTTP response.

The minimal implementation that intercepts the response requires registering a callback on the ListenableFuture. Example:

public class AsyncRequestInterceptor implements AsyncClientHttpRequestInterceptor {

   @Override
   public ListenableFuture<ClientHttpResponse> intercept(HttpRequest request, byte[] body,
         AsyncClientHttpRequestExecution execution) throws IOException {

      ListenableFuture<ClientHttpResponse> future = execution.executeAsync(request, body);
      future.addCallback(
            resp -> {
               // do something on success
            },
            ex -> {
               // process error
            });
      return future;
   }
}
Why is introducing interceptors important or useful at all? If we take a look at the existing RestTemplate functionality provided through Spring Cloud Netflix, Spring Cloud Commons, Spring Cloud Security or Spring Cloud Sleuth, we can list a bunch of interesting applications:

  • Ribbon client load balancing – this is in fact done through a ClientHttpRequestFactory, though a ClientHttpRequestInterceptor would be sufficient to achieve the same result.
  • Spring Cloud Security – uses them to add load balancing to the OAuth2RestTemplate.
  • Spring Cloud Sleuth – uses them to add tracing headers to the outgoing requests.

Some other example use cases:

  • Request/response logging
  • Retrying the requests with configurable back off strategy
  • Altering the request url address

You may expect this functionality to be available with the release of Spring Framework 4.3 and Spring Boot 1.4. Since open source projects have some inertia in development, any integration built on top of it, for instance in Spring Cloud, probably won't be available until the 1.2 release.

Spring Cloud: Zuul Trie matcher


This time I would like to focus on the Spring Cloud Zuul proxy internals, in terms of both functionality and performance. If we take a look at ProxyRouteLocator#getMatchingRoute, the method used for finding a route for each Zuul proxy call, it has two characteristics: first, it iterates over the list of all services to find a match; second, it stops at the first path that matches the request URI.

If we attempt to determine the method's running time and denote N – the number of routes – and M – the maximum length of a configured URI path – we can deduce that the running time of getMatchingRoute is O(N M). Now that is not bad, but we can do better. There is a very well known data structure that can help us reach this goal: a Trie. Though a Trie can be represented in multiple ways, we are going to talk here mostly about the simple R-way tree structure that allows us to associate a string key with a specific value, similarly to a hashtable. Simplifying a bit, a Trie node contains an optional node value and a list of links to the children nodes. Each link to a child is associated with a single character, which is why we end up with an R-way structure.

The downside is that we need extra memory, and depending on how we implement the data structure we may need a lot of it. The simplest approach is to use an array of object references, with every entry corresponding to a single character; this resembles the direct access approach to implementing a hashtable. Let's consider what would happen if we used an alphabet of 256 ASCII codes: on a 64-bit JVM, with 8 bytes per reference, we would end up with 2 KB per node. For Unicode this works out even worse – with 65536 unique values we would need 512 KB per node.
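To illustrate where the O(N M) bound comes from, here is a plain Java sketch of the naive matching scheme (my own simplification – it uses prefix matching instead of Ant-style patterns, and none of the actual Spring Cloud classes):

```java
import java.util.LinkedHashMap;
import java.util.Map;

class NaiveRouteMatcher {
    // Iterates over all configured routes (O(N)) and compares each
    // path prefix against the request URI (O(M)) - O(N M) overall.
    static String match(Map<String, String> routes, String uri) {
        for (Map.Entry<String, String> route : routes.entrySet()) {
            if (uri.startsWith(route.getKey())) {
                return route.getValue(); // stops at the first match
            }
        }
        return null;
    }
}
```

Note that the result depends on the iteration order of the route map, which is exactly the ordering problem discussed further below.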

The direct access technique would be great if it didn't require such large amounts of memory. We may notice that in this specific case we don't need to handle that many values, since for URIs RFC 3986 clearly states which characters are allowed. Though we can still do better than that: instead of using an array to store the links, why not use a JDK Map? Java's HashMap is a hashtable that uses separate chaining for conflict resolution and has a default initial capacity of 16 (though this is an implementation detail). That would save us a lot of memory for sparse nodes – those with a small number of children.
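To make this concrete, here is a minimal, self-contained sketch (my own illustration, not the actual module code) of a HashMap-based Trie that resolves the value of the longest matching prefix – the operation route matching needs:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal HashMap-based Trie: each node keeps a sparse map of child links,
// one per character, plus an optional value for keys ending at that node.
class HashMapTrie<T> {
    private final Map<Character, HashMapTrie<T>> children = new HashMap<>();
    private T value;

    void put(String key, T val) {
        HashMapTrie<T> node = this;
        for (char c : key.toCharArray()) {
            node = node.children.computeIfAbsent(c, k -> new HashMapTrie<>());
        }
        node.value = val;
    }

    // Walks the path once (O(M)), remembering the value of the deepest
    // prefix seen, so the most specific route always wins.
    T longestPrefixValue(String path) {
        HashMapTrie<T> node = this;
        T best = value;
        for (char c : path.toCharArray()) {
            node = node.children.get(c);
            if (node == null) {
                break;
            }
            if (node.value != null) {
                best = node.value;
            }
        }
        return best;
    }
}
```

Swapping the inner HashMap for a char array or a Trove TCharObjectHashMap gives the other two variants discussed here, with the same lookup logic.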

We don't have to stop there yet. There is one problem with a general purpose Map: if we need to store the primitive char type as the map key, our only option is the wrapper Character class. This will use at least 4 times more memory just for storing the references to the Character objects on the heap than an actual primitive char array would. Fortunately there is a third-party implementation of primitive type collections: Trove. The project might be a bit stale, with over 3 years since the last release, but that doesn't really matter if we can use the existing functionality. The library defines TCharObjectHashMap which, in contrast to the JDK HashMap, implements the open addressing approach with probing to find a place for the entries. Also, on the web you can find an interesting comparison of different collection classes.


Considering the above introduction I’ve put that into practice and prepared an actual implementation:


With the following Trie implementations:

  • CharArrayTrie
  • HashMapTrie
  • CharHashMapTrie

So what are the gains from replacing the standard Spring Cloud Zuul route matching implementation? Our worst case running time is now O(M), where M is defined as the maximum length of the URI path. This means that we are not bound at all by the configured number of routes: the algorithm will perform equally fast whether you have tens or hundreds of routes defined behind your edge gateway.

This is one of the best examples of a time vs. memory tradeoff I can think of.

Other considerations

A side effect of using the Trie is that we can now find the best matching path. For instance, let's consider two routes:

  • /products/**
  • /products/smartphones/**

Now we have a common prefix for both routes: /products/. With the standard implementation, the route for /products/smartphones/iphone could match either /products/** or /products/smartphones/** depending on their order in the property file; the Trie will always match the route that best matches your request URI.


At this point in time the extension is kept separately, alongside the previous Cassandra store and Zuul route healthcheck modules that I've done, and it does not really integrate with them – meaning that there is no way to have both the Cassandra store and the Trie matcher enabled at this time. My goal is to at least introduce a basic abstraction, through pull requests to Spring Cloud Netflix, that would allow externalizing and configuring parts of the Zuul integration, and afterwards release updated versions of the modules.

Spring Cloud: Zuul Cassandra route storage

This time I would like to introduce a very small, but I think useful, extension that overrides the standard Spring Cloud property based Zuul route configuration that most people are probably familiar with. Normally it looks more or less as follows:

zuul:
  ignoredServices: '*'
  routes:
    rest-service:
      path: /api/**
      serviceId: rest-service
    oauth2-service:
      path: /uaa/**
      serviceId: oauth2-service
      stripPrefix: false

This is perfectly fine if you have a fairly small project with a reasonable number of services. The problem arises when your organization defines numerous different services – having all of them defined in a single text file does not seem like a reasonable idea.

Instead you could read them from an external database. An RDBMS would probably do the job, but a NoSQL database like Cassandra, with its configurable replication factor, fits here perfectly. Even Cassandra's eventual consistency model won't be an issue in this particular case, since the data is loaded on application startup and refresh, and cached afterwards. This means that requests made through the Zuul proxy do not require a database query each time.

The extension tries to plug in seamlessly into the existing standard Zuul setup. In order to get started, you will need to add the module to your project dependencies:


Enable the altered Zuul proxy setup and register a CassandraTemplate in your application context:

@SpringBootApplication
public static class Application {

    @Bean
    public Cluster cluster() {
        return Cluster.builder()
                .addContactPoint("localhost") // adjust to your Cassandra nodes
                .build();
    }

    @Bean
    public CassandraOperations cassandraTemplate(Cluster cluster) {
        return new CassandraTemplate(cluster.connect("zuul"));
    }
}

Configure the routes to be loaded from Cassandra, by adding to your application.yml:

zuul:
  store:
    cassandra:
      enabled: true

Finally connect to Cassandra and create a keyspace.

CREATE KEYSPACE IF NOT EXISTS zuul WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 3 };

USE zuul;

CREATE TABLE zuul_routes (
    id text,
    path text,
    service_id text,
    url text,
    strip_prefix boolean,
    retryable boolean,
    PRIMARY KEY(id)
);
You may notice that the schema follows the basic structure of its properties equivalent. Finally, you can populate the table with route definitions.
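For instance, a single route could be inserted like this (the values here are purely illustrative):

```sql
INSERT INTO zuul.zuul_routes (id, path, service_id, strip_prefix, retryable)
VALUES ('rest-service', '/api/**', 'rest-service', false, false);
```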

The source code is available at Github:


I am planning to do a follow-up to this blog post, addressing a slightly different issue related to Zuul path matching.

Spring Cloud: Ribbon dynamic routing

The problem

I like to focus on solving actual problems that I've faced during project development by creating generic solutions that grant code reusability. I think that I have another good use case for defining such an abstraction: Ribbon dynamic routing.

Ribbon, and its Spring Cloud abstraction, is a very convenient client-side load balancer implementation that, through a set of Spring Cloud integration points, can be seamlessly used with Spring's RestTemplate, Feign clients or the Zuul proxy.

In general, depending on the use case, you can configure it to use a completely fixed list of servers, or to discover the services at runtime from a discovery service like Netflix Eureka. In both cases Ribbon will send requests in round robin fashion to the registered servers. That definitely covers the most common integration schemes between services, but what if you want to decide at runtime which services you would like to call?
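As a rough illustration (plain Java, not Ribbon's actual API), the round robin scheme boils down to cycling through the server list, one server per successive call:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// A minimal round-robin chooser, illustrating how requests get
// distributed over a (static or discovered) server list.
class RoundRobin {
    private final List<String> servers;
    private final AtomicInteger position = new AtomicInteger();

    RoundRobin(List<String> servers) {
        this.servers = servers;
    }

    String choose() {
        // wrap around the list, one server per successive call
        int index = Math.abs(position.getAndIncrement() % servers.size());
        return servers.get(index);
    }
}
```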

I would like to describe here one very particular scenario. Let's say you have been developing your system over time and decided to release a completely new version of your publicly available, versioned REST API. The actual versioning scheme used here is not important; the only consideration is that you have decided to fork your codebase and deploy the service as a completely new standalone "entity" that will be running simultaneously with any previously available and maintained versions. Now you face the problem of routing your requests towards the services. The easiest and most trivial approach is to use a proper naming convention for your services when registering them in the discovery service, for instance adding a version suffix. At this point you end up with names like recommendation-service-v1.0 or recommendation-service-v1.1. This would be fine, except you are binding this extra information to your service discovery name. The other consideration is when you need more than one attribute based on which to route the user request. Fortunately, the authors of Eureka have foreseen such cases: they allow storing metadata with each individual instance, in the form of key-value pairs. At this point we probably already understand what we are going to do next.

The use cases span beyond simple service versioning, since you have much flexibility you could for instance handle A/B testing based on the discovery metadata.

I want to cover a bit more in depth how you can use this extra information through Spring Cloud, working on top of the Netflix OSS projects, and how, by combining Ribbon and Eureka, you can route an HTTP request made with RestTemplate to a very specific backend service. In fact the implementation is going to be trivial, since all of the required functionality is there. We will simply query the discovery registry for the metadata map and implement a custom Ribbon IRule to match the server subset.

The Solution

The base interface that we need to implement and simply register as a Spring bean is the IRule:

public interface IRule {

    public Server choose(Object key);
    public void setLoadBalancer(ILoadBalancer lb);
    public ILoadBalancer getLoadBalancer();
}

In fact there is a convenient abstract PredicateBasedRule class that gives you a set of useful functionality out of the box.

When Eureka discovery is enabled, the server list will in fact consist of instances of DiscoveryEnabledServer, which gives straightforward access to the InstanceInfo class containing the described metadata map populated from Eureka. The only missing part is to define a "registry" that is going to be populated with the required criteria.
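Conceptually (a plain Java sketch of the idea, not the starter's actual classes), the filtering reduces to keeping only the servers whose metadata contains every attribute from the request context:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// A server as seen through discovery, with its Eureka-style metadata map.
record DiscoveredServer(String host, Map<String, String> metadata) {

    // true when this server's metadata contains every required entry
    boolean matches(Map<String, String> criteria) {
        return criteria.entrySet().stream()
                .allMatch(e -> e.getValue().equals(metadata.get(e.getKey())));
    }
}

class MetadataFilter {
    // keep only the servers matching all attributes from the request context
    static List<DiscoveredServer> filter(List<DiscoveredServer> servers,
                                         Map<String, String> criteria) {
        return servers.stream()
                .filter(s -> s.matches(criteria))
                .collect(Collectors.toList());
    }
}
```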

So without further ado, there is a Spring Cloud starter that defines the needed glue code:


You can simply import it into your Gradle

compile 'io.jmnarloch:ribbon-discovery-filter-spring-cloud-starter:2.0.0'

or Maven project:

    <dependency>
        <groupId>io.jmnarloch</groupId>
        <artifactId>ribbon-discovery-filter-spring-cloud-starter</artifactId>
        <version>2.0.0</version>
    </dependency>
It provides a simple and lightweight layer on top of the described API, with a default implementation matching the defined properties against Eureka's metadata map. From this point it is simple to implement a RestTemplate or Feign interceptor, or for instance a custom Zuul filter, that would populate this information.

The Examples

To use the provided API you simply need to populate the request attributes and then call the desired service.

RibbonFilterContextHolder.getCurrentContext().add("version", "1.0").add("variant", "A");

ResponseEntity<String> response = restOperations.getForEntity("http://recomendation-service/recomendations", String.class);

The defined entries are going to be matched against the metadata, and the server list will be filtered down to the instances matching exactly those values.

It would probably be more reasonable to implement the attribute population in some generic manner, through RestTemplate or Feign request interceptors.

Go Continuous Delivery: SBT plugin 1.0.0 released

I've released the first version of the Go Continuous Delivery SBT plugin:


To use the SBT plugin, just download the release and copy it into:


After restarting the server you should be able to see the SBT plugin on the plugin list, and also be able to add the SBT task to your pipeline stages.


Go Continuous Delivery 14.4+
Java 7+

Spring Cloud: Sock.js + STOMP + Zuul = No WebSockets

This time I wanted to share my experience from setting up a Zuul proxy in front of a WebSocket service.

Let's start by stating that Zuul does not support the WebSocket protocol: https://github.com/spring-cloud/spring-cloud-netflix/issues/163. So this post probably should have ended here. Despite that, I wanted to see what I would be able to achieve with the whole setup already in place: Zuul reverse proxies in front, and a WebSocket enabled service in the back.


Our application frontend, a single web page, was using Sock.js, a library with one crucial functionality that is especially useful in this particular case: it implements multiple fallback protocols for when the WebSocket protocol is not supported. Originally this allowed the same API to work across different browsers, hiding all the communication details. We leveraged this functionality as a workaround to communicate with our backing server through Zuul.


STOMP is a messaging protocol that can be used on top of WebSocket (or HTTP) to communicate in a more message-oriented manner. Both on the client side and on the server side it hides the details of routing the messages.

Spring Messaging

Spring Framework has supported both Sock.js and STOMP for quite some time, so we are going to use that.

Spring Integration

My goal was to connect our system internals with the application UI. The backend was already exposing a REST API, so I wanted to introduce a lightweight "WebSocket enabled" frontend that could be scaled separately and would simply forward the messages. We didn't want to split the application logic across different modules simply because they support different communication protocols; instead we connected the WebSocket module with the core services through a messaging system.

Spring Integration builds on top of the Spring WebSocket support, although there are some rough edges. I wasn't able to accomplish my goal as easily as advertised in the reference documentation: http://docs.spring.io/spring-integration/docs/4.2.1.RELEASE/reference/html/web-sockets.html and I ended up with a construct quite similar to the code below:

@Controller
public class NotificationEndpoint {

    @Resource(name = "inboundNotification")
    private DirectChannel notificationChannel;

    @MessageMapping("/notification")
    public void notification(String payload, Principal principal) {
        notificationChannel.send(new GenericMessage<>(payload, createHeaders(principal)));
    }
}

Eventually we used RabbitMQ in two slightly different roles. First, it was used as the STOMP message relay, allowing us to scale out the WebSocket nodes. The second usage was to pass messages around through AMQP, so that notifications from the backend could be forwarded through Sock.js and STOMP to the UI.

Spring OAuth

We had been using OAuth2 for authentication, and for that we needed to propagate the user's authorization token through Sock.js to be able to authenticate. Due to this issue: https://github.com/spring-projects/spring-security-oauth/issues/478 you will have to use version 2.0.8, otherwise you will be receiving a 401 – Unauthorized status.


The Zuul setup was initially pretty standard, requiring only a proper route to the discovered service:

zuul:
  ignoredServices: '*'
  routes:
    ws-frontend-service:
      path: /ws/**
      serviceId: ws-frontend-service

Unfortunately, the first tests we performed revealed that the client could not maintain the connection, which was continuously being closed. Fortunately, Sock.js was designed in a way that requires the server to send heartbeats. The solution was to properly set the Ribbon/Hystrix timeouts as described here. Eventually we set the timeouts to double the Sock.js heartbeat delay:

hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds: 50000
ribbon:
  ConnectTimeout: 3000
  ReadTimeout: 50000


Overall, I could be satisfied with this solution, though it's not perfect, and long polling will be resource consuming (keeping the connection open on the proxy and occupying an application server thread – WebSocket connections are generally handled by separate server thread pools). The nice side effect of the above stack is that it allows transparently moving to a different transport protocol: Sock.js and STOMP nicely hide all the communication details. But if you need real WebSocket support, you will have to give up on Zuul. Hopefully with the release of Zuul 2 this will change.

Spring Cloud 1.1: RxJava

The next big thing that is going to happen in the Java world in the near future is the standardization of the reactive programming API on the JVM. Java 9 is most likely going to be bundled with the Reactive Streams API. Also, some existing libraries like RxJava 2.x are moving towards the same standardized API. From what you could learn at the last SpringOne, Spring 5 has plans for seamless integration with the reactive paradigm.

The question is: do you really need to wait to facilitate all of these goodies? The answer is: not necessarily, especially since you can already use libraries like RxJava.

Spring MVC integration

One nice integration that you get out of the box, thanks to the upcoming Spring Cloud Netflix 1.1.x release, is support for Observable return types in your controller methods, which is convenient if you make Observable calls to remote services and perform some aggregation on the results.

With Spring Cloud Netflix 1.1.x on the classpath you can transform your Spring MVC REST controllers into the following:

@RestController
public class ProductController {

    private final ProductService productService;

    public ProductController(ProductService productService) {
        this.productService = productService;
    }

    @RequestMapping(method = GET)
    public Observable<Product> getProductList() {
        return productService.getProducts();
    }
}
You still need to configure a TaskExecutor to be able to use this functionality, as you would with the standard DeferredResult, ListenableFuture or Callable return types.

Hystrix and Javanica

Hystrix commands are capable of handling Observable return types as well.

Future possible integrations

I happened to find an interesting pull request that brings RxJava integration directly to the Spring Data Commons module; sadly it hasn't been merged yet. Personally I think that it would be really convenient to run multiple queries at the same time through such an API. Maybe this issue simply needs some extra support from the community?

If you are interested in this topic, I highly advise visiting the RxJava project page and some articles on the web.

Spring Cloud: Zuul routes health

When I described the Go CD health check plugin, I briefly talked about its use cases and how it can be set up to automatically delay acceptance or stress tests. It's time to describe in a bit more detail how it can be used with a Spring Cloud application in particular. We wanted to run tests of the system's user interface, but to do that we needed all the backing systems to be available. Since the application works in a reverse proxy setup, we simply awaited for the depending services to register with an 'UP' state in our discovery service. So what we wanted to do is adjust the application health endpoint to make it aware of whether the routes to the backend services are available.

This required registering a custom Spring Boot Actuator health indicator. I have prepared a small utility project that does this:


It can be turned on and off through the 'zuul.health.enabled' flag:

zuul:
  ignoredServices: '*'
  health:
    enabled: true
  routes:
    rest-service:
      path: /api/**
      serviceId: rest-service
    oauth2:
      path: /uaa/**
      serviceId: oauth2
      stripPrefix: false

So you can configure it for your test/dev/staging environments and disable it for production, mostly because cascading system failures might not be such a good idea. Though for our use case this worked more than well.

The extension has its limitations: at this point it only checks the explicitly configured services (the routes with a specified serviceId) and verifies whether they are accessible through the discovery service.
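In essence (a plain Java sketch of the idea, not the module's actual code), the indicator's logic reduces to checking each route's serviceId against the discovery registry:

```java
import java.util.Collection;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

class RouteHealth {
    // Returns the serviceIds of configured routes that are missing from
    // the discovery registry; an empty result means the proxy can report
    // an 'UP' status for its routes.
    static List<String> missingServices(Collection<String> routeServiceIds,
                                        Set<String> discoveredServices) {
        return routeServiceIds.stream()
                .filter(id -> !discoveredServices.contains(id))
                .collect(Collectors.toList());
    }
}
```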

Health check in practice

Spring Cloud: Eureka, Zuul and OAuth2 – scaling out authorization server

We are going to touch here on a very practical problem: scaling out the Spring OAuth2 authorization server, and describing in a bit more detail how this can be done using Spring Cloud itself. Your authorization server might be requested multiple times during the processing of a single request, so you want it to be able to match the demand.

When I first approached the issue I didn't fully understand which functionalities Spring Cloud provides and made some false assumptions, so it might be worth describing how to do a basic setup, what is happening under the hood and what you might expect.

In Spring OAuth2 2.0.x you have three very helpful annotations that enable different functionalities within your application:

  • @EnableOAuth2Resource
  • @EnableOAuth2Sso
  • @EnableOAuth2Client

I want to cover the first two in greater detail.

For each of the annotations there are corresponding properties that must be specified. The minimal resource server configuration needed to verify the token and retrieve the user information is to specify the userInfoUri property:

spring:
  oauth2:
    resource:
      userInfoUri: http://localhost:9999/uaa/user

This will allow the incoming requests to be verified using Bearer Token authentication.

So most likely you are going to enable the resource server on practically every microservice that you develop. On one of your edge services you may want to enable single sign-on. The @EnableOAuth2Sso annotation will turn on automatic redirection of your unauthenticated users towards the authorization server, where they will be able to log in.

Similarly, you will have to specify some extra properties, like the address of the authorization server's token and authorization URIs, together with the client credentials. We can also specify which paths within our application should be secured:

spring:
  oauth2:
    sso:
      home:
        secure: true
        path: /,/**/*.html
    client:
      accessTokenUri: http://localhost:9999/uaa/oauth/token
      userAuthorizationUri: http://localhost:9999/uaa/oauth/authorize
      clientId: edge
      clientSecret: secret

Now, this is a minimal setup that is going to work on a single node (of course you can put the authorization server behind any load balancer).

This is not all: if you import the spring-cloud-security module (in version at least 1.0.2) you can expect the following. On each resource server the auto-configuration class is going to set up an OAuth2RestTemplate that will propagate the authentication token, and if you happen to have a discovery client enabled (by importing the Eureka or Consul client and setting it up correctly) it will also be capable of discovering and distributing the load towards the registered services.

Also, you can achieve the same for Feign clients through a custom extension.

Finally, if you set spring.oauth2.resource.loadBalanced to true, you will configure client-side load balancing for the retrieval of user information from the authorization servers (when you run multiple nodes and register them in the discovery service):

spring:
  oauth2:
    resource:
      loadBalanced: true
      userInfoUri: http://oauth2/uaa/user

This feature isn't specifically documented; I happened to find it by going through the Spring Cloud Security source code.

This is already neat, but what about our single sign-on? So far we had to specify a single address.

In the project I had been working on, we addressed the issue in two different ways in two slightly different configurations. In one of the applications, a single-page application, we completely resigned from the built-in server side redirection and instead used AngularJS and a simple AJAX POST for authentication. For that to work we had to either enable CORS (which the authorization server doesn't do, though you can find an interesting article on how to do this) or configure a reverse proxy – like Zuul. We chose the latter. Through a simple setup you can route the authorization AJAX request through Zuul towards the authorization server:

zuul:
  ignoredServices: '*'
  routes:
    oauth2:
      path: /uaa/**
      serviceId: oauth2
      stripPrefix: false

Finally, if you want to use Spring's single sign-on you can do that by setting up the Zuul proxy as your edge gateway and correctly configuring the route to the auto-discovered nodes, replacing accessTokenUri and userAuthorizationUri with the edge gateway address (preferably a DNS name). Of course there wouldn't be any reason not to use any other load balancer instead, especially if your cloud provider offers one out of the box, although the built-in Zuul proxy auto discovery is very tempting to use here.

So to sum up: for your "internal" communication the resource servers can use the built-in Spring Cloud load balancing support with the help of the discovery service. When the "external" world needs to communicate with your OAuth2 server, you should put a load balancer in front of it, with Netflix Zuul being a natural candidate, making the solution portable across different cloud providers as well as containers, like Docker for instance.

With such a setup you are now able to add as many authorization server nodes as you want.