Spring Boot: Hystrix and ThreadLocals


Last time I described a quite useful, at least from my perspective, extension for RxJava. Overall it defined only syntactic sugar that lets you easily specify your custom RxJava Scheduler. One of the mentioned applications is very relevant to this blog post: being able to pass around ThreadLocal variables. In RxJava, whenever we spawn a new thread by subscribing on a Scheduler, it loses any context that was stored in the ThreadLocal variables of the "outer" thread that initiated the task.

The same applies to Hystrix commands.

Initially, the credit for this idea should go to the development team at my previous company, Allegro Tech, but it is such a recurring problem that others have solved it in the past. Yet again I had the need to solve it once more.

Let's say that I would like to execute the following command. Let's put aside for a moment the sense of doing so; it only serves to illustrate the problem:

new HystrixCommand<Object>(commandKey()) {
    @Override
    protected Object run() throws Exception {
        return RequestContextHolder.currentRequestAttributes()
                .getAttribute("RequestId", SCOPE_REQUEST);
    }
}.execute();

Poof – the data is gone.

Even when run in a servlet container, in the "context" of a request-bound thread, the above code will end with an exception. This happens because Hystrix by default spawns a new thread to execute the code (aside from the semaphore isolation mode that can also be used). Hystrix manages its own thread pools for the commands, and those have no relation to any context stored in the ThreadLocals of the triggering thread.

Overall, ThreadLocal variables might be considered an anti-pattern by some, but they are so useful in many practical scenarios that it is not at all uncommon for quite a few libraries to depend on them.

Typically this means your logging MDC context or, in case of the Spring Framework, the security Authentication/Principal or the request/session scoped beans. So for some use cases it is quite important to be able to correctly pass such information along. Imagine a typical scenario: you are trying to use OAuth2RestTemplate in a @HystrixCommand annotated method. Sadly, this isn't going to work.

The solution

Fortunately the designers of the Hystrix library have anticipated such use cases and provided the proper extension points. Basically, the idea is to let you decorate the executing task with your own logic just before it is invoked by the thread.

On top of that I’ve prepared a small Spring Boot integration module:


At this point the implementation is fairly simple and a bit limited. In order to pass a specific thread-bound value you need to provide your own custom implementation of HystrixCallableWrapper.

For instance, to "fix" the above snippet we can register as a bean the following class:

public class RequestAttributeAwareCallableWrapper implements HystrixCallableWrapper {

    @Override
    public <T> Callable<T> wrapCallable(Callable<T> callable) {
        return new RequestAttributeAwareCallable<>(callable, RequestContextHolder.currentRequestAttributes());
    }

    private static class RequestAttributeAwareCallable<T> implements Callable<T> {

        private final Callable<T> callable;
        private final RequestAttributes requestAttributes;

        public RequestAttributeAwareCallable(Callable<T> callable, RequestAttributes requestAttributes) {
            this.callable = callable;
            this.requestAttributes = requestAttributes;
        }

        @Override
        public T call() throws Exception {
            try {
                RequestContextHolder.setRequestAttributes(requestAttributes);
                return callable.call();
            } finally {
                RequestContextHolder.resetRequestAttributes();
            }
        }
    }
}

Adding Java 8 syntactic sugar

It struck me quite soon that this is a rather boilerplate implementation, because for every variable that we would like to share we would have to implement a pretty similar class.

So why not try a slightly different approach: simply create a "template" implementation that can be conveniently filled with specific logic at defined "extension" points. Java 8 method references can be quite useful here, mostly because in a typical scenario the operations performed are limited to: retrieving a value, setting it and finally clearing any trace of it.

public class HystrixCallableWrapperBuilder<T> {

    private final Supplier<T> supplier;

    private Consumer<T> before;

    private Consumer<T> after;

    public HystrixCallableWrapperBuilder(Supplier<T> supplier) {
        this.supplier = supplier;
    }

    public static <T> HystrixCallableWrapperBuilder<T> usingContext(Supplier<T> supplier) {
        return new HystrixCallableWrapperBuilder<>(supplier);
    }

    public HystrixCallableWrapperBuilder<T> beforeCall(Consumer<T> before) {
        this.before = before;
        return this;
    }

    public HystrixCallableWrapperBuilder<T> beforeCallExecute(Runnable before) {
        this.before = ctx -> before.run();
        return this;
    }

    public HystrixCallableWrapperBuilder<T> afterCall(Consumer<T> after) {
        this.after = after;
        return this;
    }

    public HystrixCallableWrapperBuilder<T> afterCallExecute(Runnable after) {
        this.after = ctx -> after.run();
        return this;
    }

    public HystrixCallableWrapper build() {
        return new HystrixCallableWrapper() {
            @Override
            public <V> Callable<V> wrapCallable(Callable<V> callable) {
                return new AroundHystrixCallableWrapper<V>(callable, supplier.get(), before, after);
            }
        };
    }

    private class AroundHystrixCallableWrapper<V> implements Callable<V> {

        private final Callable<V> callable;

        private final T context;

        private final Consumer<T> before;

        private final Consumer<T> after;

        public AroundHystrixCallableWrapper(Callable<V> callable, T context, Consumer<T> before, Consumer<T> after) {
            this.callable = callable;
            this.context = context;
            this.before = before;
            this.after = after;
        }

        @Override
        public V call() throws Exception {
            try {
                before();
                return callable.call();
            } finally {
                after();
            }
        }

        private void before() {
            if (before != null) {
                before.accept(context);
            }
        }

        private void after() {
            if (after != null) {
                after.accept(context);
            }
        }
    }
}

The above code is not part of the described extension, but you may use it freely as you wish.

Afterwards we may instantiate as many wrappers as we would like to:


As a result we would be able to pass, for instance, the MDC context, the Spring Security Authentication or any other data that is needed.
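The mechanism can be illustrated without Hystrix or Spring at all. Below is a self-contained sketch (the ThreadLocal and helper names are mine, not part of the module) showing how capturing a value on the calling thread and restoring it inside the wrapped Callable makes it visible on the executor's thread:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Consumer;
import java.util.function.Supplier;

public class ContextPassingDemo {

    // A plain ThreadLocal standing in for MDC or Spring's RequestContextHolder
    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    // Captures the context on the calling thread and restores it around the task
    static <V> Callable<V> wrap(Callable<V> task,
                                Supplier<String> capture,
                                Consumer<String> restore,
                                Runnable cleanup) {
        final String context = capture.get(); // runs on the calling thread
        return () -> {
            try {
                restore.accept(context); // runs on the executor's thread
                return task.call();
            } finally {
                cleanup.run(); // avoid leaking context into pooled threads
            }
        };
    }

    public static void main(String[] args) throws Exception {
        CONTEXT.set("request-42");
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> result = pool.submit(
                wrap(CONTEXT::get, CONTEXT::get, CONTEXT::set, CONTEXT::remove));
        System.out.println(result.get()); // prints "request-42"
        pool.shutdown();
    }
}
```

Without the wrapping, the task submitted to the pool would see a null context, which is exactly the problem the Hystrix wrapper solves.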

Separation of concerns

This is clearly a cleaner solution, but it still has one fundamental drawback: it requires specifying the logic for every single ThreadLocal variable separately. It would be far more convenient to define only the logic of passing the variables across thread boundaries, whether towards Hystrix or any other library like, for instance, RxJava. The only catch is that in order to do so those variables would first have to be identified and encapsulated in a proper abstraction.

I think such an idea would be worth implementing, though I don't yet have a complete solution for it.


Nevertheless I would be interested in developing a PoC of such a generic solution and, as done in the past, once again prepare a pull request, for instance to Spring Cloud, to provide such end-to-end functionality.

Spring Cloud: Zuul Trie matcher


This time I would like to focus on the Spring Cloud Zuul proxy internals, in terms of both functionality and performance. If we take a look at ProxyRouteLocator#getMatchingRoute, the method used for finding a route for each Zuul proxy call, it has two characteristics: first, it iterates over the list of all services to find the first one matching; second, it stops when it finds the first path that matches the request URI.

If we attempt to determine the method's running time, and denote N as the number of routes and M as the maximum length of a configured URI path, we can deduce that the running time of getMatchingRoute is O(N M). Now that is not bad, but we can do better. There is a very well known data structure that can help us with this goal: a Trie. Though a Trie can be represented in multiple ways, we are going to talk here mostly about the simple R-way tree structure that allows us to associate a string key with a specific value, similarly to a hashtable. To simplify, a Trie node contains an optional node value and a list of links to the children nodes. Each link to a child is associated with a single character; this is why we end up with the R-way structure.

The downside is that we need extra memory, and depending on how we implement the data structure we may need a lot of it. The simplest approach is to use an array of object references, with every entry corresponding to a single character; this resembles the direct-access approach to implementing a hashtable. Let's consider what would happen if we wanted to use an alphabet of 256 ASCII codes: on a 64-bit JVM, with 8 bytes per reference, we would end up with 2 KB per node. For Unicode this works out even worse: with 65536 unique values we would need 512 KB per node.

The direct-access technique would be great if it didn't require such large amounts of memory. We may notice that in this specific case we don't need to handle so many values, since for URIs RFC 3986 clearly states which characters are allowed. Though we can still do better than that: instead of using an array to store the links, why not use a JDK Map? Java's HashMap is a hashtable that uses separate chaining for conflict resolution and has a default initial capacity of 16 (though this is an implementation detail). That would save us a lot of memory for sparse nodes – those with a small number of children.

We don't have to stop there yet. There is one problem with a general-purpose Map: if we need to store the primitive char type as the map key, our only option is the wrapper Character class. This will use at least 4 times more memory just for storing the references to the Character objects on the heap than an actual primitive char array would. Fortunately there is a third-party implementation of primitive-type collections: Trove. The project might be a bit stale, with over 3 years since the last release, but that doesn't really matter if we can use the existing functionality. The library defines TCharObjectHashMap which, in contrast to the JDK HashMap, implements open addressing with probing for finding the place for the entries. On the web you can also find interesting comparisons of different collection classes.


Considering the above introduction, I've put this into practice and prepared an actual implementation:


With the following Trie implementations:

  • CharArrayTrie
  • HashMapTrie
  • CharHashMapTrie

So what do we gain by replacing the standard Spring Cloud Zuul route matching implementation? Our worst-case running time is now O(M), where M is again the maximum length of the URI path, which means that we are not bound at all by the configured number of routes: the algorithm will perform equally fast in every case, no matter whether you have tens or hundreds of routes defined behind your edge gateway.

This is one of the best examples of a time vs. memory tradeoff I can think of.

Other considerations

A side effect of using the Trie is that we can now find the best matching path. For instance, let's consider two routes:

  • /products/**
  • /products/smartphones/**

Now we have a common prefix for both routes: /products/. With the standard implementation, the route for /products/smartphones/iphone could match either /products/** or /products/smartphones/**, depending on the order in the property file; the Trie will always match the route that best matches your request URI.
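To make the idea concrete, here is a minimal sketch of a HashMap-backed Trie with longest-prefix lookup, which is the operation that route matching boils down to (an illustration of the approach, not the module's actual code; the class and method names are mine):

```java
import java.util.HashMap;
import java.util.Map;

public class HashMapTrieDemo {

    static class Trie<T> {

        private final Node<T> root = new Node<>();

        private static final class Node<T> {
            T value;
            final Map<Character, Node<T>> children = new HashMap<>();
        }

        void put(String key, T value) {
            Node<T> node = root;
            for (char c : key.toCharArray()) {
                node = node.children.computeIfAbsent(c, k -> new Node<>());
            }
            node.value = value;
        }

        // Returns the value stored under the longest key that is a prefix of
        // the path, which is why the most specific route always wins
        T longestPrefixMatch(String path) {
            Node<T> node = root;
            T match = node.value;
            for (char c : path.toCharArray()) {
                node = node.children.get(c);
                if (node == null) {
                    break;
                }
                if (node.value != null) {
                    match = node.value;
                }
            }
            return match;
        }
    }

    public static void main(String[] args) {
        Trie<String> routes = new Trie<>();
        routes.put("/products/", "products-service");
        routes.put("/products/smartphones/", "smartphones-service");
        // The longer of the two matching prefixes wins
        System.out.println(routes.longestPrefixMatch("/products/smartphones/iphone"));
        // prints "smartphones-service"
    }
}
```

The lookup walks at most M characters of the request path, which is where the O(M) bound comes from; swapping the HashMap for a char array or a Trove TCharObjectHashMap changes only the memory profile, not the algorithm.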


At this point in time the extension is kept separately, alongside the previous Cassandra store and Zuul route healthcheck modules that I've done, and it does not really integrate with them – meaning that there is no way to have both the Cassandra store and the Trie matcher enabled at this time. My goal would be to at least introduce the basic abstractions, through pull requests to Spring Cloud Netflix, that would allow externalizing and configuring parts of the Zuul integration, and afterwards release updated versions of the modules.

Spring Cloud: Zuul Cassandra route storage

This time I would like to introduce a very small, but I think useful, extension that overrides the standard Spring Cloud property-based Zuul route configuration that most people are probably familiar with. Normally it looks more or less as follows:

zuul:
  ignoredServices: '*'
  routes:
    api:
      path: /api/**
      serviceId: rest-service
    uaa:
      path: /uaa/**
      serviceId: oauth2-service
      stripPrefix: false

This is perfectly fine if you have a fairly small project with a reasonable number of services. The problem arises when your organization defines numerous different services; having all of them defined in a single text file does not seem like a reasonable idea.

Instead you could read them from an external database. An RDBMS would probably do the job, but a NoSQL database like Cassandra, with a configurable replication factor, fits here perfectly. Even Cassandra's eventual consistency model won't be an issue in this particular case, since the data is loaded on application startup and refresh, and cached afterwards. This means that requests made through the Zuul proxy do not require a database query each time.

The extension tries to plug in seamlessly into the existing standard Zuul setup. In order to get started you will need to add the module to your project dependencies:


Enable the altered Zuul proxy setup, and also register a CassandraTemplate in your application context:

@Configuration
public static class Application {

    @Bean
    public Cluster cluster() {
        return Cluster.builder()
                .addContactPoint("localhost")
                .build();
    }

    @Bean
    public CassandraOperations cassandraTemplate(Cluster cluster) {
        return new CassandraTemplate(cluster.connect("zuul"));
    }
}

Configure the routes to be loaded from Cassandra by adding to your application.yml:

zuul:
  store:
    cassandra:
      enabled: true

Finally, connect to Cassandra and create a keyspace:

CREATE KEYSPACE IF NOT EXISTS zuul WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 3 };

USE zuul;

CREATE TABLE zuul_routes (
    id text,
    path text,
    service_id text,
    url text,
    strip_prefix boolean,
    retryable boolean,
    PRIMARY KEY(id)
);

You can notice that the schema follows the basic structure of the properties equivalent. Finally, you can populate the table with route definitions.
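For example, a route equivalent to one of the property-based definitions could be inserted like this (the column values are illustrative):

```sql
INSERT INTO zuul_routes (id, path, service_id, strip_prefix, retryable)
VALUES ('uaa', '/uaa/**', 'oauth2-service', false, false);
```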

The source code is available at Github:


I am planning to do a follow-up to this blog post, addressing a slightly different issue related to Zuul path matching.

Spring Cloud: Ribbon dynamic routing

The problem

I like to focus on solving actual problems that I've faced during project development by creating generic solutions that grant code reusability. I think that I have another good use case for defining such an abstraction: Ribbon dynamic routing.

Ribbon, and its Spring Cloud abstraction, is a very convenient client-side load balancer implementation that, through a set of Spring Cloud integration points, can be seamlessly used with Spring's RestTemplate, Feign clients or the Zuul proxy.

In general, depending on the use case, you can configure it to use a completely fixed list of servers or to discover the services at runtime from a discovery service like Netflix Eureka; in both cases Ribbon will send the requests in round-robin fashion to the registered servers. That definitely covers the most common integration schemes between services, but what if you want to decide at runtime which services you would like to call?

I would like to describe here a very particular scenario. Let's consider that you have been developing your system over time and decided to release a completely new version of your publicly available REST API, which itself is versioned. The actual versioning scheme used here is not important; the only consideration is that you have decided to fork your codebase to develop and deploy the service as a completely new standalone "entity" that will be working simultaneously with any previously available and maintained versions.

Now you face the problem of routing your requests towards the services. The easiest and most trivial approach is to use a proper naming convention for your services when registering them in the discovery service, for instance adding a version suffix. At this point you end up with names like recommendation-service-v1.0 or recommendation-service-v1.1. This would be fine, except that you are binding this extra information to your service discovery name. The other consideration is when you need more than one attribute based on which you want to route the user request. Fortunately, the authors of Eureka have foreseen such a case: they allow storing metadata with each individual instance, in the form of key-value pairs. At this point we probably already understand what we are going to do next.

The use cases span beyond simple service versioning; since you have much flexibility, you could for instance handle A/B testing based on the discovery metadata.

I want to cover a bit in depth how you can use this extra information through Spring Cloud, working on top of the Netflix OSS projects, and how by combining Ribbon and Eureka you can route, for instance, an HTTP request made with RestTemplate to a very specific backend service. In fact the implementation is going to be trivial, since all of the required functionality is there. We will simply query the discovery registry for the metadata map and implement a custom Ribbon IRule to match the server subset.

The Solution

The base interface that we need to implement, and simply register as a Spring bean, is IRule:

public interface IRule {

    public Server choose(Object key);
    public void setLoadBalancer(ILoadBalancer lb);
    public ILoadBalancer getLoadBalancer();
}

In fact there is a convenient abstract PredicateBasedRule class that gives a set of useful functionality out of the box.

When Eureka discovery is enabled, the server list will in fact contain instances of DiscoveryEnabledServer, which gives straightforward access to the InstanceInfo class containing the described metadata map populated from Eureka. The only missing part is to define a "registry" which is going to be populated with the required criteria.

So without further ado, there is a Spring Cloud starter that defines the needed glue code:


You can simply import it into your Gradle

compile 'io.jmnarloch:ribbon-discovery-filter-spring-cloud-starter:2.0.0'

or Maven project:

<dependency>
    <groupId>io.jmnarloch</groupId>
    <artifactId>ribbon-discovery-filter-spring-cloud-starter</artifactId>
    <version>2.0.0</version>
</dependency>
It provides a simple and lightweight layer on top of the described API, with a default implementation matching the defined properties against Eureka's metadata map. From this point it is simple to implement a RestTemplate or Feign interceptor, or for instance a custom Zuul filter, that would populate this information.

The Examples

To use the provided API you simply need to populate the request attributes and then call the desired service:

RibbonFilterContextHolder.getCurrentContext().add("version", "1.0").add("variant", "A");

ResponseEntity<String> response = restOperations.getForEntity("http://recommendation-service/recommendations", String.class);

The defined entries are going to be matched against the metadata, and the server list will be filtered down to the instances matching exactly those specific values.
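The filtering itself boils down to a subset check between the required attributes and each instance's metadata. A self-contained sketch of that idea (plain maps standing in for Eureka's InstanceInfo metadata; the class and method names are mine):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class MetadataFilterDemo {

    // Keeps only the servers whose metadata contains every required key-value pair
    static List<Map<String, String>> filter(List<Map<String, String>> servers,
                                            Map<String, String> required) {
        return servers.stream()
                .filter(metadata -> metadata.entrySet().containsAll(required.entrySet()))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Map<String, String>> servers = List.of(
                Map.of("version", "1.0", "variant", "A"),
                Map.of("version", "1.0", "variant", "B"),
                Map.of("version", "1.1", "variant", "A"));
        // Only the first instance carries both required attributes
        System.out.println(filter(servers, Map.of("version", "1.0", "variant", "A")).size());
        // prints "1"
    }
}
```

In the actual starter this predicate runs inside the custom IRule, so the round-robin choice is made only among the surviving instances.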

It would probably be more reasonable to implement the attribute population in some generic manner, through RestTemplate or Feign request interceptors.

Go Continuous Delivery: SBT plugin 1.0.0 released

I've released the first version of the Go Continuous Delivery SBT plugin:


To use the SBT plugin, just download the release and copy it into:


After restarting the server you should be able to see the SBT plugin on the plugin list and also be able to add the SBT task to your pipeline stages.


  • Go Continuous Delivery 14.4+
  • Java 7+

Spring Cloud: Sock.js + STOMP + Zuul = No WebSockets

This time I wanted to share my experience from setting up a Zuul proxy in front of a WebSocket service.

Let's start by stating that Zuul does not support the WebSocket protocol: https://github.com/spring-cloud/spring-cloud-netflix/issues/163. So probably this post should have ended here. Despite that, I wanted to see what I would be able to achieve with the setup already in place: Zuul reverse proxies in front and a WebSocket-enabled service in the back.


Our application frontend, a single web page, was using Sock.js, a library with one crucial functionality, especially useful in this particular case: it implements multiple fallback protocols for when the WebSocket protocol is not supported. Initially this allowed the same API to work with different browsers, hiding all communication details. We used this functionality as a workaround to communicate with our backing server through Zuul.


STOMP is a messaging protocol that can be used on top of WebSocket (or HTTP) to communicate in a more message-oriented manner. Both on the client side and on the server side it hides the details of routing the messages.

Spring Messaging

The Spring Framework has supported both Sock.js and STOMP for quite some time, so we are going to use that.

Spring Integration

My goal was to connect our system internals with the application UI. The backend was already exposing a REST API, so I wanted to introduce a lightweight "WebSocket-enabled" frontend that could be scaled separately and could simply forward the messages. We didn't want to split the application logic across different modules simply because they support different communication protocols; instead we connected the WebSocket module with the core services through a messaging system.

Spring Integration does build on top of Spring's WebSocket support, although there are some rough edges. I wasn't able to accomplish my goal as easily as advertised in the reference: http://docs.spring.io/spring-integration/docs/4.2.1.RELEASE/reference/html/web-sockets.html and I ended up with a construct quite similar to the code below:

public class NotificationEndpoint {

    @Resource(name = "inboundNotification")
    private DirectChannel notificationChannel;

    public void notification(String payload, Principal principal) {
        notificationChannel.send(new GenericMessage<>(payload, createHeaders(principal)));
    }
}


Eventually we used RabbitMQ in two slightly different roles: first as the STOMP message relay, allowing us to scale out the WebSocket nodes; second, to pass the messages around through AMQP, so that notifications from the backend could be forwarded through Sock.js and STOMP to the UI.

Spring OAuth

We had been using OAuth2 for authentication, and for that we needed to propagate the user's authorization token through Sock.js to be able to authenticate. Due to this issue: https://github.com/spring-projects/spring-security-oauth/issues/478 you will have to use version 2.0.8, otherwise you will be receiving a 401 Unauthorized status.


The Zuul setup was initially pretty standard, requiring only a proper route to the discovered service:

zuul:
  ignoredServices: '*'
  routes:
    ws:
      path: /ws/**
      serviceId: ws-frontend-service

Unfortunately, the first tests that we performed revealed that the clients could not maintain the connection, which was continuously being closed. Fortunately, Sock.js was designed in a way that requires the server to send heartbeats. The solution was to properly set the Ribbon/Hystrix timeouts as described here. Eventually we set the timeouts to double the Sock.js heartbeat delay:

hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds: 50000
ribbon:
  ConnectTimeout: 3000
  ReadTimeout: 50000


Overall I could be satisfied with such a solution, though it's not perfect, and long polling will be resource-consuming (keeping the connection open on the proxy and occupying an application server thread – WebSocket connections are generally handled by separate server thread pools). A nice side effect of the above stack is that it should allow transparently moving to a different transport protocol, as Sock.js and STOMP nicely hide all the communication details. But if you need real WebSocket support you will have to resign from Zuul. Hopefully with the release of Zuul 2 this will change.

Spring Cloud 1.1: RxJava

The next big thing to happen in the Java world in the near future is the standardization of the reactive programming API on the JVM. Java 9 will most likely be bundled with the Reactive Streams API, and some existing libraries, like RxJava 2.x, are moving towards the same standardized API. From what you could learn at the last SpringOne, Spring 5 has plans for seamless integration with the reactive paradigm.

The question is: do you really need to wait to facilitate all of these goodies? The answer is: not necessarily, especially since you can already use libraries like RxJava.

Spring MVC integration

One nice integration that you get out of the box, thanks to the upcoming Spring Cloud Netflix 1.1.x release, is support for Observable return types in your controller methods, which is convenient if you make Observable calls to remote services and perform some aggregation on the results.

With Spring Cloud Netflix 1.1.x on the classpath you can transform your Spring MVC REST controllers into the following:

public class ProductController {

    private final ProductService productService;

    public ProductController(ProductService productService) {
        this.productService = productService;
    }

    @RequestMapping(method = GET)
    public Observable<Product> getProductList() {
        return productService.getProducts();
    }
}

You still need to configure a TaskExecutor to be able to use this functionality, as you would with the standard DeferredResult, ListenableFuture or Callable return types.

Hystrix and Javanica

Hystrix commands are capable of handling Observable return types as well.

Future possible integrations

I happened to find an interesting pull request that brings RxJava integration directly to the Spring Data Commons module; sadly, it hasn't been merged yet. Personally I think it would be really convenient to run multiple queries at the same time through such an API. Maybe this issue simply needs some extra support from the community?

If you are more interested in this topic I highly advise visiting the RxJava project page and some articles on the web.

Spring Cloud: Zuul routes health

When I described the Go CD health check plugin, I briefly talked about its use cases and how it can be set up to automatically delay acceptance or stress tests. It's time to describe in a bit more detail how it can be used with a Spring Cloud application in particular. We wanted to run tests of the system's user interface, but to do that we needed all the backing systems to be available. Since the application works in a reverse proxy setup, we simply had to wait for the depending services to register with the 'UP' state in our discovery service. So what we wanted to do is adjust the application health endpoint to make it aware of whether the routes to the backend services are available.

This requires registering a custom Spring Boot Actuator health indicator. I have prepared a small utility project that does just that:


It can be turned on and off through the 'zuul.health.enabled' flag:

zuul:
  ignoredServices: '*'
  health:
    enabled: true
  routes:
    api:
      path: /api/**
      serviceId: rest-service
    uaa:
      path: /uaa/**
      serviceId: oauth2
      stripPrefix: false

So you can configure it for your test/dev/staging environments and disable it for production, mostly because cascading system failures might not be such a good idea. For our use case, though, this worked more than well.

The extension has its limitations: at this point it only checks the explicitly configured services (the routes with a specified serviceId) and verifies whether they are accessible according to the discovery service.

Health check in practice

Spring Cloud: Eureka, Zuul and OAuth2 – scaling out authorization server

We are going to touch here on a very practical problem: scaling out the Spring OAuth2 authorization server, and describing in a bit more detail how this can be done using Spring Cloud itself. Your authorization server might be requested multiple times during the processing of a single request, so you would rather want it to be able to match the demand.

When I first approached the issue I didn't fully understand which functionalities Spring Cloud provides and made some false assumptions, so it might be worth describing how to do a basic setup, what is happening under the hood and what you might expect.

In Spring OAuth2 2.0.x you have three very helpful annotations that can enable different functionalities within your application:

  • @EnableOAuth2Resource
  • @EnableOAuth2Sso
  • @EnableOAuth2Client

I want to cover the first two in greater detail.

For each of the annotations there are corresponding properties that must be specified. The minimal resource server configuration needed to verify the token and retrieve the user information is to specify the userInfoUri property:

spring:
  oauth2:
    resource:
      userInfoUri: http://localhost:9999/uaa/user

This will allow the incoming requests to be verified using Bearer Token authentication.

So most likely you are going to enable the resource server on probably every microservice that you develop. On one of your edge services you may want to enable single sign-on. The @EnableOAuth2Sso annotation will turn on automatic redirection of your unauthenticated users towards the authorization server, where they will be able to log in.

Similarly, you will have to specify some extra properties, like the addresses of the authorization server's token and authorization URIs, together with the client credentials. We can also specify which paths within our application should be secured:

spring:
  oauth2:
    sso:
      home:
        secure: true
        path: /,/**/*.html
    client:
      accessTokenUri: http://localhost:9999/uaa/oauth/token
      userAuthorizationUri: http://localhost:9999/uaa/oauth/authorize
      clientId: edge
      clientSecret: secret

Now, this is a minimal setup that will work on a single node (of course you can put the authorization server behind any load balancer).

This is not all: if you import the spring-cloud-security module (in version at least 1.0.2) you can expect the following. On each resource server, the auto-configuration class is going to set up an OAuth2RestTemplate that will propagate the authentication token, and if you happen to have a discovery client enabled (by importing the Eureka or Consul client and setting it up correctly) it will also be capable of discovering and distributing the load towards the registered services.

You can also achieve the same for Feign clients through a custom extension.

Finally, if you set spring.oauth2.resource.loadBalanced to true, you will configure client-side load balancing for the retrieval of the user information from the authorization servers (when you run multiple nodes and register them in the discovery service):

spring:
  oauth2:
    resource:
      loadBalanced: true
      userInfoUri: http://oauth2/uaa/user

This feature isn't specially documented; I happened to find it by going through the Spring Cloud Security source code.

This is already neat, but what about our single sign-on? So far we had to specify a single address.

In the project that I had been working on we addressed the issue in two different ways, in two slightly different configurations. In one of the applications, a single-page application, we completely resigned from the built-in server-side redirection and instead used AngularJS and a simple AJAX POST for authentication. For that to work we had to either enable CORS (which the authorization server doesn't do, though you can find an interesting article on how to do this) or configure a reverse proxy like Zuul. We chose the latter. Through a simple setup you can route the authorization AJAX requests through Zuul towards the authorization server:

zuul:
  ignoredServices: '*'
  routes:
    oauth2:
      path: /uaa/**
      serviceId: oauth2
      stripPrefix: false

Finally, if you want to use Spring’s single sign-on, you can do so by setting up the Zuul proxy as your edge gateway and configuring the route to the auto-discovered nodes, replacing accessTokenUri and userAuthorizationUri with the edge gateway address (preferably a DNS name). Of course, there is no reason not to use any other load balancer instead, especially if your cloud provider offers one out of the box, although the built-in Zuul proxy auto-discovery is very tempting to use here.
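As a sketch of that idea, the client configuration would then point at the edge gateway instead of a single authorization node. The gateway.example.com host is a placeholder, and the /uaa/oauth/token path is an assumption based on the standard Spring OAuth endpoint layout:

```yaml
spring:
  oauth2:
    client:
      # the edge gateway (placeholder DNS name) fronts the authorization server nodes
      accessTokenUri: http://gateway.example.com/uaa/oauth/token
      userAuthorizationUri: http://gateway.example.com/uaa/oauth/authorize
      clientId: edge
      clientSecret: secret
```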

So, to sum up: for your “internal” communication, the resource servers can use the built-in Spring Cloud load balancing support with the help of the discovery service. When the “external” world needs to communicate with your OAuth2 server, you should put a load balancer in front of it, with Netflix Zuul being a natural candidate, which also makes the solution portable across different cloud providers as well as containers, like Docker for instance.

With such a setup you are now able to add as many authorization server nodes as you want.

Java 8 functional interfaces and varargs functions

Following the article about simple partial function application in Java, I made a small experiment with handling varargs methods using lambdas and functional interfaces. Such functions can be represented in the form of a SAM interface:

@FunctionalInterface
public interface VarArgFunction<R, T> {

    R apply(T... args);
}

As mentioned in the previous post, you can in fact assign a variable-argument method to an interface of any argument count, as long as the arguments are all of the same type:

Function f = Functions::funcVarArgs;
Function1<String, String> f1 = Functions::funcVarArgs;
Function2<String, String, String> f2 = Functions::funcVarArgs;

But we can also have our function “assigned” to the varargs interface:

VarArgFunction<String, String> varargs = Functions::funcVarArgs;
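As a self-contained illustration of these assignments, the following compiles and runs with just the JDK. The funcVarArgs method here is a hypothetical stand-in for the Functions class from the previous post:

```java
import java.util.function.Function;

public class VarArgsDemo {

    // hypothetical stand-in for Functions::funcVarArgs from the previous post
    static String funcVarArgs(String... args) {
        return String.join("-", args);
    }

    // SAM interface accepting a variable number of arguments
    @FunctionalInterface
    interface VarArgFunction<R, T> {
        R apply(T... args);
    }

    public static void main(String[] args) {
        // the same varargs method satisfies an interface of fixed arity...
        Function<String, String> oneArg = VarArgsDemo::funcVarArgs;
        // ...as well as the varargs interface itself
        VarArgFunction<String, String> varargs = VarArgsDemo::funcVarArgs;

        System.out.println(oneArg.apply("a"));        // a
        System.out.println(varargs.apply("a", "b"));  // a-b
    }
}
```

Note that the compiler emits an unchecked warning for the generic varargs call, which is the usual heap-pollution caveat for such interfaces.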

Java 8 default methods turn out to be very powerful together with lambda functions, because we can now define instance methods on that interface. Let’s try to implement a partial function application similar to the one for standard functions:

    default VarArgFunction<R, T> arg(T arg) {

        // a lambda cannot declare a varargs parameter, so it binds to the whole T[] array
        return args -> {
            // Array.newInstance preserves the runtime component type of the array,
            // avoiding a ClassCastException when the base function is a method reference
            @SuppressWarnings("unchecked")
            final T[] arguments = (T[]) Array.newInstance(args.getClass().getComponentType(), args.length + 1);
            arguments[0] = arg;
            System.arraycopy(args, 0, arguments, 1, args.length);
            return apply(arguments);
        };
    }

Unfortunately, this is the best we can do when expanding the original argument array: we need to create a new array and copy all of the original values into it. The benefit is that we can now add function arguments at runtime, for instance in a loop, and we are not bound by any constraints except the heap size:

for (String arg : args) {
    varargs = varargs.arg(arg);
}

String result = varargs.apply();

The chain is followed by an immediate apply() invocation to get the computation result.

Unfortunately, this implementation has one significant problem: since every arg() call wraps the function in a lambda that allocates a fresh array, the running time of the apply method becomes quadratic, O(N^2). The arg() call itself performs its work in O(1) time, but when the function gets evaluated it cascades through the apply calls, creating a new array for each individual one.

Now, can we do better than that?

Actually, a simple wrapper around the interface will do the job. Since there is already a Partial class in the library, let’s extend it with one extra definition:

static <R, T> VarArgFunction<R, T> vargFunction(VarArgFunction<R, T> function) {

        return new VarArgFunction<R, T>() {

            private final List<T> args = new LinkedList<>();

            public R apply(T... args) {
                // combine the collected arguments with any passed directly to apply()
                final List<T> all = new LinkedList<>(this.args);
                all.addAll(Arrays.asList(args));
                @SuppressWarnings("unchecked")
                final T[] argsArray = (T[]) Array.newInstance(args.getClass().getComponentType(), all.size());
                return function.apply(all.toArray(argsArray));
            }

            public VarArgFunction<R, T> arg(T arg) {
                this.args.add(arg);
                return this;
            }
        };
    }
This time we wrap the original functional interface and store the arguments in a collection. We have effectively reduced the running time of the apply method to linear time, O(N), proportional to the number of arguments applied to the function.

This way we still provide the base functionality through the default method implementation, while a more specialized implementation can be supplied through a wrapper around the original lambda expression.
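Putting the pieces together, here is a self-contained sketch of the wrapper idea. The interface and the join method are simplified stand-ins for the library’s VarArgFunction and Functions::funcVarArgs:

```java
import java.lang.reflect.Array;
import java.util.Arrays;
import java.util.LinkedList;
import java.util.List;

public class WrapperDemo {

    // simplified stand-in for the varargs method from the previous post
    static String join(String... parts) {
        return String.join("-", parts);
    }

    @FunctionalInterface
    interface VarArgFunction<R, T> {
        R apply(T... args);

        // default implementation: prepend one argument by chaining lambdas (O(N^2) overall)
        default VarArgFunction<R, T> arg(T arg) {
            return args -> {
                @SuppressWarnings("unchecked")
                final T[] arguments = (T[]) Array.newInstance(
                        args.getClass().getComponentType(), args.length + 1);
                arguments[0] = arg;
                System.arraycopy(args, 0, arguments, 1, args.length);
                return apply(arguments);
            };
        }
    }

    // wrapper that collects arguments in a list instead of chaining lambdas (O(N))
    static <R, T> VarArgFunction<R, T> vargFunction(VarArgFunction<R, T> function) {
        return new VarArgFunction<R, T>() {

            private final List<T> collected = new LinkedList<>();

            @Override
            public R apply(T... args) {
                // combine collected arguments with those passed directly
                final List<T> all = new LinkedList<>(collected);
                all.addAll(Arrays.asList(args));
                @SuppressWarnings("unchecked")
                final T[] array = (T[]) Array.newInstance(
                        args.getClass().getComponentType(), all.size());
                return function.apply(all.toArray(array));
            }

            @Override
            public VarArgFunction<R, T> arg(T arg) {
                collected.add(arg);
                return this;
            }
        };
    }

    public static void main(String[] args) {
        VarArgFunction<String, String> f = vargFunction(WrapperDemo::join);
        f.arg("a").arg("b");
        System.out.println(f.apply("c"));  // a-b-c
    }
}
```

Note that the wrapper is mutable: each arg() call adds to the internal list, so a wrapped function instance should not be shared across independent argument chains.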

If you like, you can grab the code and see if you can find some interesting applications for running variable-argument functions through functional interfaces.