Spring Cloud: Feign OAuth2 authentication

It’s worth describing one additional use case for Spring Cloud Feign clients in a microservice-oriented architecture: authentication.

You may quickly face the fact that your requests are sent across multiple services, and that those services may need to be aware of the user on whose behalf the requests are being processed. You can then use that information to perform user-specific operations, or simply to authorize the request and verify that the user is permitted to perform specific actions.

Fortunately, the Spring Cloud Security module comes to the rescue here: whenever you use RestTemplate with OAuth2 authentication, this information is propagated with every remote call you perform. The module configures an OAuth2RestTemplate for you that can be injected and used like a normal RestOperations/RestTemplate, provided you have enabled Spring’s OAuth2 context, which happens when you enable the resource server or the OAuth2 client using @EnableOAuth2Client.
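A minimal sketch of that RestTemplate usage, assuming a Spring Cloud Security setup with @EnableOAuth2Client; the service name, the Account type and the endpoint are hypothetical:

```java
import org.springframework.security.oauth2.client.OAuth2RestTemplate;
import org.springframework.stereotype.Service;

// Hypothetical client bean: Account and the URL are illustrative only.
@Service
public class AccountClient {

    private final OAuth2RestTemplate restTemplate;

    public AccountClient(OAuth2RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    public Account currentAccount() {
        // The Authorization: Bearer <token> header is attached automatically
        // from the current OAuth2 context
        return restTemplate.getForObject("http://account-service/api/me", Account.class);
    }
}
```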

Unfortunately, this does not apply to your Feign clients, but we are going to change that in two simple steps, adding support for OAuth Bearer token authentication.

First let’s define our custom RequestInterceptor.

public class OAuth2FeignRequestInterceptor implements RequestInterceptor {

    private static final String AUTHORIZATION_HEADER = "Authorization";

    private static final String BEARER_TOKEN_TYPE = "Bearer";

    private static final Logger LOGGER = LoggerFactory.getLogger(OAuth2FeignRequestInterceptor.class);

    private final OAuth2ClientContext oauth2ClientContext;

    public OAuth2FeignRequestInterceptor(OAuth2ClientContext oauth2ClientContext) {
        Assert.notNull(oauth2ClientContext, "Context can not be null");
        this.oauth2ClientContext = oauth2ClientContext;
    }

    @Override
    public void apply(RequestTemplate template) {

        if (template.headers().containsKey(AUTHORIZATION_HEADER)) {
            LOGGER.warn("The Authorization token has been already set");
        } else if (oauth2ClientContext.getAccessTokenRequest().getExistingToken() == null) {
            LOGGER.warn("Can not obtain existing token for request, if it is a non secured request, ignore.");
        } else {
            LOGGER.debug("Constructing Header {} for Token {}", AUTHORIZATION_HEADER, BEARER_TOKEN_TYPE);
            template.header(AUTHORIZATION_HEADER, String.format("%s %s", BEARER_TOKEN_TYPE,
                    oauth2ClientContext.getAccessTokenRequest().getExistingToken().toString()));
        }
    }
}

We use Spring OAuth2’s OAuth2ClientContext here, which stores the request-bound OAuth token details.

Next, we need to register the interceptor in our application. With Spring Boot we can leverage the auto-configuration features and register our class in META-INF/spring.factories:

org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
io.jmnarloch.spring.cloud.feign.OAuth2FeignAutoConfiguration

Finally the configuration itself:

@Configuration
@ConditionalOnClass({ Feign.class })
@ConditionalOnProperty(value = "feign.oauth2.enabled", matchIfMissing = true)
public class OAuth2FeignAutoConfiguration {

    @Bean
    @ConditionalOnBean(OAuth2ClientContext.class)
    public RequestInterceptor oauth2FeignRequestInterceptor(OAuth2ClientContext oauth2ClientContext) {
        return new OAuth2FeignRequestInterceptor(oauth2ClientContext);
    }
}

Basically, that’s it. From now on, every call made through a Feign client will also carry the Authorization header.
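For illustration, a plain Feign client now needs no extra code to forward the token; the service name, endpoint and Account type below are hypothetical:

```java
import org.springframework.cloud.netflix.feign.FeignClient;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

// Hypothetical client: the OAuth2FeignRequestInterceptor defined above adds
// the Authorization header to every request made through this interface.
@FeignClient("account-service")
public interface AccountServiceClient {

    @RequestMapping(value = "/api/me", method = RequestMethod.GET)
    Account currentAccount();
}
```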

As always there is a Spring Cloud starter that you can use:

compile ('io.jmnarloch:feign-oauth2-spring-cloud-starter:1.0.0')

The source code is available at Github:

https://github.com/jmnarloch/feign-oauth2-spring-cloud-starter

Spring Cloud: Feign SPDY/HTTP2

I like to experiment with different things, so this time let’s see how we can use alternative transport protocols with our Feign clients.

Spring Cloud’s Feign clients work by default over either the java.net API or HttpClient (since Spring Cloud 1.1), depending on the version and settings, but we aren’t going to use either of those. Instead, we are going to switch to Square’s OkHttp.

We do this because HttpClient supports no protocol other than HTTP 1.0/1.1, as the HttpClient 4.5.1 documentation states:

Standards based, pure Java, implementation of HTTP versions 1.0 and 1.1

Though HTTP/2 support is planned for a future release.

OkHttp, in contrast, supports both SPDY and HTTP/2, while still being able to fall back to HTTP 1.1 to communicate with any “older” server that does not speak the latest transport protocols.
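As a sketch, assuming OkHttp 2.x (the com.squareup.okhttp package used below), the advertised protocols can be configured explicitly on the client:

```java
import com.squareup.okhttp.OkHttpClient;
import com.squareup.okhttp.Protocol;
import java.util.Arrays;

// Advertise HTTP/2 and SPDY, keeping HTTP/1.1 in the list as a fallback
// for servers that don't speak the newer protocols.
OkHttpClient client = new OkHttpClient();
client.setProtocols(Arrays.asList(Protocol.HTTP_2, Protocol.SPDY_3, Protocol.HTTP_1_1));
```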

Feign setup

First let’s make sure we are using the latest Feign version:

configurations.all {
    resolutionStrategy {
        eachDependency { DependencyResolveDetails details ->
            if (details.requested.group == 'com.netflix.feign') {
                details.useVersion "8.10.1"
            }
        }
    }
}

Let’s set up our project and add the required dependencies:

compile ('org.springframework.cloud:spring-cloud-starter-feign:1.0.3.RELEASE')
compile ('com.netflix.feign:feign-core:8.10.1')
compile ('com.netflix.feign:feign-okhttp:8.10.1')

If you are using Spring Cloud 1.1 you will have to first disable the HttpClient support:

feign.httpclient.enabled=false

Now we need to configure Feign to work over OkHttp. This is only a matter of configuring the proper Spring bean.

    @Configuration
    @ConditionalOnClass(com.squareup.okhttp.OkHttpClient.class)
    public class OkHttpClientAutoConfiguration {

        @Autowired(required = false)
        private com.squareup.okhttp.OkHttpClient httpClient;

        @Bean
        @ConditionalOnMissingBean(OkHttpClient.class)
        public Client feignClient() {
            if (httpClient != null) {
                return new OkHttpClient(httpClient);
            }
            return new OkHttpClient();
        }
    }

If you are using Ribbon and a discovery service, you are also going to need to configure the load-balanced client. In the end you will end up with a setup like this:


@Configuration
@ConditionalOnClass({ com.squareup.okhttp.OkHttpClient.class, Feign.class, ILoadBalancer.class })
@ConditionalOnProperty(value = "feign.okhttp.enabled", matchIfMissing = true)
@AutoConfigureBefore(FeignAutoConfiguration.class)
public class OkHttpClientAutoConfiguration {

    @Autowired(required = false)
    private com.squareup.okhttp.OkHttpClient httpClient;

    @Resource(name = "cachingLBClientFactory")
    private LBClientFactory lbClientFactory;

    @Bean
    public Client feignClient() {
        RibbonClient.Builder builder = RibbonClient.builder();

        if (httpClient != null) {
            builder.delegate(new OkHttpClient(httpClient));
        } else {
            builder.delegate(new OkHttpClient());
        }

        if (lbClientFactory != null) {
            builder.lbClientFactory(lbClientFactory);
        }

        return builder.build();
    }
}

Alternatively, you may simply want to use the Spring Cloud starter that will do this for you:

https://github.com/jmnarloch/feign-okhttp-spring-cloud-starter

by importing it into your project:

compile ('io.jmnarloch:feign-okhttp-spring-cloud-starter:1.0.0')

Go Continuous Delivery: Health check plugin

This is another Go Continuous Delivery plugin for your build pipeline toolbelt. Its name is quite descriptive, so let’s first discuss its motivation. As you may know, you can use Go not only for building your applications, but also for automated deployments. Nevertheless, you shouldn’t stop there. Go build pipelines are ideal for orchestrating fully featured automated test workflows, so it’s not uncommon that after the deployment stages you will configure your acceptance tests using, for instance, tools like Selenium or Protractor. Those run against a real setup with real data, in addition to any unit or integration tests that you may have run during the application build. The acceptance tests can then be immediately followed by stress tests or performance benchmarks using, for example, Gatling.

On my current project we had reached the stage where both the build and the deployment were fully automated, but the remaining pipeline stages required user interaction and manual triggering, mostly because there is a delay between the point in time the application is started and the point at which it becomes fully operational. In the meantime it may perform a lot of hard work: establishing database connections and migrating the database schema, for instance. Registration in the discovery service and propagation of the registry among clients is also not instant. Due to all of these factors, we had been running our tests only after manually checking that the application had completed initialization and was ready to process requests.

But that is now history.

We leveraged the application health information to tell whether it is in a state in which it can process requests, and we added health checks to our build stages to automate monitoring them. Although this could easily be “implemented” in, for instance, a deployment script, the idea seemed to have large reusability potential, so it has been implemented as a Go plugin.

https://github.com/jmnarloch/gocd-health-check-plugin

gocd_healthcheck_screen

You can configure the health check task in any of your build stages. The task will delay the pipeline for up to the configured number of seconds, at the same time polling your application’s health endpoint until it starts to report that it is in the “UP” state. If that does not happen, a timeout occurs and the build fails.
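The polling logic can be sketched in plain Java (simplified; the class and method names are mine, and the real plugin is reactive rather than thread-blocking):

```java
import java.util.function.Supplier;

public class HealthCheck {

    // Polls the status supplier until it reports "UP" or the timeout elapses;
    // returns false on timeout, mirroring a failed build.
    static boolean awaitUp(Supplier<String> status, long timeoutMillis, long intervalMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if ("UP".equals(status.get())) {
                return true;
            }
            try {
                Thread.sleep(intervalMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // A real implementation would poll the /health endpoint over HTTP
        System.out.println(awaitUp(() -> "UP", 5000, 100));
    }
}
```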

You will still need to adjust your application’s behaviour and define additional smart health checks that will, for instance, wait until your Zuul proxy (if you are using Spring Cloud and Zuul in particular) is able to route your requests, but this can be done easily.

The implementation is reactive and uses RxJava/RxNetty under the hood. Though it was initially implemented with Project Reactor, it turned out that that did not work so well in Apache Felix, the OSGi container used by Go internally.

Spring Cloud: Feign request/response compression

If you happen to have read any of the previous posts, you have probably guessed that we have been using Netflix Feign a lot in our projects. In fact, we have created a couple of extensions: one for OAuth support, one for custom error decoding, and one more for handling GZIP compression of both requests and responses.

https://github.com/jmnarloch/feign-encoding-spring-cloud-starter

It is yet another Spring Cloud starter that you simply need to drop into your project dependencies:

Whether you are using Maven:

<dependency>
  <groupId>com.github.jmnarloch</groupId>
  <artifactId>feign-encoding-spring-cloud-starter</artifactId>
  <version>1.1.1</version>
</dependency>

or Gradle:

compile 'com.github.jmnarloch:feign-encoding-spring-cloud-starter:1.1.1'

Feign setup

Then you need to decide whether you want to support compression of your requests, your responses, or both. You do this by annotating your configuration class with @EnableFeignAcceptGzipEncoding and/or @EnableFeignContentGzipEncoding.

@EnableFeignAcceptGzipEncoding
@EnableFeignContentGzipEncoding
@EnableFeignClients
@Configuration
public class Application {

}

Simple, isn’t it?

This registers Feign interceptors that enrich the outgoing requests with the Accept-Encoding and Content-Encoding HTTP headers. Be aware that this still doesn’t automatically cause every request/response to be compressed. For instance, the server might be configured to compress only specific media types, or only responses above a specific threshold length.

Corresponding client-side Feign options exist:

feign.compression.min-request-size=2048 # the minimum request size
feign.compression.mime-types=text/xml,application/xml,application/json # the request media types

These allow you to specify the supported media types and which requests should use GZIP compression.

Disclaimer:

If you run this extension with Spring Cloud Netflix 1.0.1, you will run into the following error:

feign.codec.DecodeException: Could not read JSON: Illegal character ((CTRL-CHAR, code 31)): only regular white space (\r, \n, \t) is allowed between tokens
 at [Source: java.io.PushbackInputStream@c6b2dd9; line: 1, column: 2]; nested exception is com.fasterxml.jackson.core.JsonParseException: Illegal character ((CTRL-CHAR, code 31)): only regular white space (\r, \n, \t) is allowed between tokens
 at [Source: java.io.PushbackInputStream@c6b2dd9; line: 1, column: 2]
    at feign.SynchronousMethodHandler.decode(SynchronousMethodHandler.java:150)
    at feign.SynchronousMethodHandler.executeAndDecode(SynchronousMethodHandler.java:118)
    at feign.SynchronousMethodHandler.invoke(SynchronousMethodHandler.java:71)
    at feign.ReflectiveFeign$FeignInvocationHandler.invoke(ReflectiveFeign.java:94)
    at com.sun.proxy.$Proxy64.getInvoices(Unknown Source)
    at com.github.jmnarloch.spring.cloud.feign.FeignAcceptEncodingTest.compressedResponse(FeignAcceptEncodingTest.java:64)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.springframework.test.context.junit4.statements.RunBeforeTestMethodCallbacks.evaluate(RunBeforeTestMethodCallbacks.java:73)
    at org.springframework.test.context.junit4.statements.RunAfterTestMethodCallbacks.evaluate(RunAfterTestMethodCallbacks.java:82)
    at org.springframework.test.context.junit4.statements.SpringRepeat.evaluate(SpringRepeat.java:73)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
    at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:224)
    at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:83)
    ... 40 more
Caused by: org.springframework.http.converter.HttpMessageNotReadableException: Could not read JSON: Illegal character ((CTRL-CHAR, code 31)): only regular white space (\r, \n, \t) is allowed between tokens
 at [Source: java.io.PushbackInputStream@c6b2dd9; line: 1, column: 2]; nested exception is com.fasterxml.jackson.core.JsonParseException: Illegal character ((CTRL-CHAR, code 31)): only regular white space (\r, \n, \t) is allowed between tokens
 at [Source: java.io.PushbackInputStream@c6b2dd9; line: 1, column: 2]
    at org.springframework.http.converter.json.AbstractJackson2HttpMessageConverter.readJavaType(AbstractJackson2HttpMessageConverter.java:208)
    at org.springframework.http.converter.json.AbstractJackson2HttpMessageConverter.read(AbstractJackson2HttpMessageConverter.java:200)
    at org.springframework.web.client.HttpMessageConverterExtractor.extractData(HttpMessageConverterExtractor.java:97)
    at org.springframework.cloud.netflix.feign.support.SpringDecoder.decode(SpringDecoder.java:57)
    at org.springframework.cloud.netflix.feign.support.ResponseEntityDecoder.decode(ResponseEntityDecoder.java:40)
    at feign.SynchronousMethodHandler.decode(SynchronousMethodHandler.java:146)
    ... 45 more
Caused by: com.fasterxml.jackson.core.JsonParseException: Illegal character ((CTRL-CHAR, code 31)): only regular white space (\r, \n, \t) is allowed between tokens
 at [Source: java.io.PushbackInputStream@c6b2dd9; line: 1, column: 2]
    at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1419)
    at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:508)
    at com.fasterxml.jackson.core.base.ParserMinimalBase._throwInvalidSpace(ParserMinimalBase.java:459)
    at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._skipWSOrEnd(UTF8StreamJsonParser.java:2625)
    at com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextToken(UTF8StreamJsonParser.java:645)
    at com.fasterxml.jackson.databind.ObjectMapper._initForReading(ObjectMapper.java:3105)
    at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3051)
    at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2221)
    at org.springframework.http.converter.json.AbstractJackson2HttpMessageConverter.readJavaType(AbstractJackson2HttpMessageConverter.java:205)
    ... 50 more

To fix this, you will have to enable Feign HttpClient support. Follow the instructions at https://github.com/jmnarloch/feign-encoding-spring-cloud-starter#know-issues on how to do this.
In Spring Cloud Netflix 1.1 this problem does not exist.

A sneak peek at Spring Cloud 1.1

Starting from Spring Cloud 1.1, you will have control over the request/response compression and be able to enable it by specifying one of the properties:

feign.compression.request.enabled=true
feign.compression.response.enabled=true

In the case of responses you have to rely entirely on the server settings (the server may not support payload compression at all, or may compress only selected kinds of responses). On the client side, by contrast, you have more fine-grained control over which requests are compressed, by specifying the supported media types and the threshold content length.

Note: request compression requires that your server understand the Content-Encoding header and be able to decompress the incoming request, therefore it’s advisable to use this feature with caution (it’s perfect for system-internal communication, where you have full control over each individual subsystem).

Go Continuous Delivery: Gradle plugin 1.0.0 released

I’ve been working on updating the Go Continuous Delivery Gradle plugin’s internals so that I could release it as a final “1.0” version.

On the surface not much has changed: the UI remains the same, and so does the functionality. What has really changed is the internal implementation, which no longer uses the Go Java API (now deprecated) and instead relies completely on the latest JSON API. That required a lot of changes and refactoring; in fact, I’ve created a helper utility library that wraps the low-level API calls and provides simple extension points where you can “put” your own plugin logic.

The plugin wrapper has a repository in its own right:

https://github.com/jmnarloch/gocd-task-plugin-api

To use the Gradle plugin, just download one of the releases and copy it into:

$GO_SERVER_HOME/plugins/external

After restarting the server, you should see the Gradle plugin in the plugin list and be able to add the Gradle task to your pipeline stages.

go-cd-gradle-task

Requirements:
Go Continuous Delivery 14.4+
Java 7+

Spring Cloud: Feign Vnd.error decoder

I’ve already spoiled this extension in one of the previous posts, but essentially I have made a custom Feign error decoder that cooperates with Spring Cloud and automatically handles any Vnd.error returned from remote service calls.

The extension is pretty simple to use: you just drop it onto your classpath.

Example pom.xml

<dependency>
  <groupId>com.github.jmnarloch</groupId>
  <artifactId>feign-vnderror-spring-cloud-starter</artifactId>
  <version>1.1.1</version>
</dependency>

or your build.gradle

compile 'com.github.jmnarloch:feign-vnderror-spring-cloud-starter:1.1.1'

From now on, every Vnd.error received through a Feign call will be automatically unmarshalled and used to populate a VndErrorException, giving you more structured access to the error information.
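On the calling side you can then catch the decoded exception; a sketch, in which invoiceClient is a hypothetical Feign client and LOGGER a hypothetical logger:

```java
// Illustrative only: the VndErrorException is thrown by the error decoder
// whenever the remote service responds with a Vnd.error payload.
try {
    invoiceClient.getInvoices();
} catch (VndErrorException e) {
    // The exception is populated from the unmarshalled Vnd.error,
    // so the logref and message can be logged for correlation
    LOGGER.error("Remote call failed: {}", e.getMessage(), e);
}
```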

The project source code is available at Github: https://github.com/jmnarloch/feign-vnderror-spring-cloud-starter

The server-side setup is also really simple: all you need to do is define a custom Spring @ExceptionHandler that builds the VndError object and returns it as the response:

@ExceptionHandler
public ResponseEntity error(Exception ex) {

    final VndError vndError = new VndError(RequestCorrelationUtils.getCurrentCorrelationId(), ex.getMessage());

    return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
            .header(HttpHeaders.CONTENT_TYPE, "application/vnd.error+json")
            .body(vndError);
}

You may ask yourself what the reason for using Vnd.error is in the first place. The foremost gain is that you adopt a unified approach for representing errors within your system, which becomes even more important if you expose the API for public use.

Hibernate 5 with Java 8

This is going to be a follow-up to one of the very first posts, in which I introduced a small Hibernate wrapper.

This time I have released an updated version that compiles against the just-released Hibernate 5:

https://github.com/jmnarloch/hstreams/tree/hstreams-5

The changes have been rather cosmetic, although Hibernate 5 brings one major improvement: it natively supports Java 8 types (mostly the JSR-310 date/time API: LocalDate, LocalTime, etc.) at the entity mapping level, so you no longer need extensions like Jadira to handle them.
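As a sketch of what that looks like in practice (the entity and its fields are hypothetical), the JSR-310 types can now appear directly in a mapped class:

```java
import java.time.LocalDate;
import java.time.LocalTime;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// Hypothetical entity: the JSR-310 fields map to date/time columns
// without Jadira usertypes or custom @Type annotations.
@Entity
public class Meeting {

    @Id
    @GeneratedValue
    private Long id;

    private LocalDate day;       // mapped natively by Hibernate 5
    private LocalTime startTime;

    // getters and setters omitted
}
```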

The wrapper itself adds syntactic sugar for Java 8 lambda-style queries, so you will be able to fully leverage all of the Java 8 features.

Spring Cloud: Zuul error handling

Have you ever bothered to understand how the Spring Cloud Zuul proxy behaves in case of an error? If everything works fine, your successful or errored request gets proxied, but what about the cases where Zuul doesn’t have a definition for a specific service id, the connection itself can’t be established, or the Hystrix circuit is open? There are some edge cases in the Zuul implementation that are worth knowing about.

Let’s see what happens when your Zuul proxy cannot access a service through the discovery client, simply because no service has been registered, or because all of the nodes are in the DOWN state for some reason. You might be surprised by the fact that when an error is triggered in one of the Zuul filters, it won’t be handled by your Spring-registered exception handlers. Instead, the exception is propagated through the servlet context up to your ‘/error’-mapped handler (though I have observed inconsistent default behavior across different servlet containers). So if you run your app on Tomcat, you might see an error page quite similar to the one below.

zuul_tomcat

Everything as you would expect? An HTTP 500 error status and a default page that exposes the application stack trace to the client.

Let’s redo this “test” on Undertow (1.2.12). This time we get an empty page and an HTTP 200 Success status. (This was the actual cause of a notorious error: assume the user just POST-ed “login” information to a Zuul-proxied backend and received 200 due to an error in forwarding. You would rather not want that to happen in production.)

zuul_undertow

Both of these behaviours are far from perfect. What if all you wanted to do was perform an AJAX call through Zuul and this behaviour breaks your API? What if you coded your client with error handling logic that, in case of an error, expects a JSON payload with the error details?

So let’s see how we can fix this. Zuul error handling is performed by RibbonRoutingFilter and SendErrorFilter, which forward any errored request to ${error.path}, which defaults to ‘/error’. If you rely on the defaults, that will be handled by Spring Boot’s BasicErrorController. You can override this behaviour and implement your own ErrorController. Let’s work under one assumption: since we are using the Zuul reverse proxy, pretty much all of our application is only serving static content (scripts, CSS, HTML, etc.), so we are only interested in returning the error to our frontend in an easy-to-understand form. For this purpose, let’s create a Vnd.error representation of the error.

Here is how we do this:

@Controller
public class VndErrorController implements ErrorController {

    @Value("${error.path:/error}")
    private String errorPath;
    
    @Override
    public String getErrorPath() {
        return errorPath;
    }

    @RequestMapping(value = "${error.path:/error}", produces = "application/vnd.error+json")
    public @ResponseBody ResponseEntity error(HttpServletRequest request) {

        final String logref = RequestCorrelationUtils.getCurrentCorrelationId();
        final int status = getErrorStatus(request);
        final String errorMessage = getErrorMessage(request);
        final VndError error = new VndError(logref, errorMessage);
        return ResponseEntity.status(status).body(error);
    }

    private int getErrorStatus(HttpServletRequest request) {
        Integer statusCode = (Integer)request.getAttribute("javax.servlet.error.status_code");
        return statusCode != null ? statusCode : HttpStatus.INTERNAL_SERVER_ERROR.value();
    }

    private String getErrorMessage(HttpServletRequest request) {
        final Throwable exc = (Throwable) request.getAttribute("javax.servlet.error.exception");
        return exc != null ? exc.getMessage() : "Unexpected error occurred";
    }
}

You may notice the RequestCorrelationUtils.getCurrentCorrelationId method from the previous post. What we have done here is implement a custom error handling controller that creates a VndError based on the servlet error attributes and returns it as the response payload.

Undertow setup

Undertow is a bit different and requires special handling. By default it disallows non-standard request wrappers and throws an exception in case they are used. You need to configure it correctly in order to use it with Zuul.

    @Bean
    public UndertowEmbeddedServletContainerFactory embeddedServletContainerFactory() {
        UndertowEmbeddedServletContainerFactory factory = new UndertowEmbeddedServletContainerFactory();
        factory.addDeploymentInfoCustomizers(new UndertowDeploymentInfoCustomizer() {
            @Override
            public void customize(DeploymentInfo deploymentInfo) {
                deploymentInfo.setAllowNonStandardWrappers(true);
            }
        });
        return factory;
    }

We need to customize our deployment and set allow-non-standard-wrappers to true.

Java partial function application

I have made a small experiment with the Java 8 lambda features, and as a result I’ve created this small utility library that implements a pretty simple partial function application paradigm. It won’t be as flexible as in other languages (like Scala), since the implementation is library-based and Java has no language-level support for this. So the constraint at this moment is that a function can take up to 5 arguments.

Let’s see the code:

Provided that we have a simple arithmetic function:

public int substract(int a, int b) {
    return a - b;
}

We can take a method reference, assign it to a functional interface, and invoke it:

int sum = Partial.function(this::substract)
        .apply(1, 2);

We can achieve the same effect by applying the arguments one by one and invoking the function at the end:

int sum = Partial.function(this::substract)
        .arg(1)
        .arg(2)
        .apply();

It’s even possible to apply the arguments from “right to left”:

int sum = Partial.function(this::substract)
        .rarg(2)
        .rarg(1)
        .apply();

What is noticeable, and I wasn’t fully aware of this, is that you can assign a varargs method reference to a functional interface taking any number of arguments. Below are all valid assignments:

Function f = Functions::funcVarArgs;
Function1<String, String> f1 = Functions::funcVarArgs;
Function2<String, String, String> f2 = Functions::funcVarArgs;

You don’t have to use Partial; you can directly assign the lambda or method reference to one of the Function interfaces, although the Partial class creates a nice form of DSL for invoking the method in place.
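A self-contained illustration of the varargs observation, using only the JDK’s own functional interfaces (the class and method names here are mine):

```java
import java.util.function.BiFunction;
import java.util.function.Function;
import java.util.function.Supplier;

public class VarargsRefs {

    // A varargs method: callable with any number of String arguments
    static String concat(String... parts) {
        return String.join("+", parts);
    }

    public static void main(String[] args) {
        // The same method reference satisfies interfaces of different arities
        Supplier<String> f0 = VarargsRefs::concat;
        Function<String, String> f1 = VarargsRefs::concat;
        BiFunction<String, String, String> f2 = VarargsRefs::concat;

        System.out.println(f0.get());           // ""
        System.out.println(f1.apply("a"));      // "a"
        System.out.println(f2.apply("a", "b")); // "a+b"
    }
}
```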

The partial applications are statically typed; for example:

public String url(String scheme, String host, int port, String username, String repo) {
        return String.format("%s://%s:%d/%s/%s", scheme, host, port, username, repo);
}

String url = Partial.function(this::url)
                .arg("https")
                .arg("github.com")
                .arg(443)
                .arg("jmnarloch")
                .apply("funava");

The source code is available on Github:

https://github.com/jmnarloch/funava

Spring Cloud: Request correlation

Sometimes development can seem like reinventing the wheel, and this post may partially look that way, since it invents nothing new; it just provides an implementation for a common problem: request correlation.

It’s a truism to say that in a microservice environment HTTP requests (apart from any other data being exchanged, such as messages) may be propagated to and processed by multiple individual services. Tracing them becomes a non-trivial task, and you need additional information to be able to track them. A simple and proven method is to pass a unique identifier with each individual request. There are various approaches, and no unified header exists for this; across the web you will find solutions that use X-Request-Id, X-Trace-Id or X-Correlation-Id. The value is always bound to the upstream request through an HTTP header, and in most cases also bound internally by each process to the currently processed thread.

We have been using exactly the same approach in our project, through a simple implementation:

https://github.com/jmnarloch/request-correlation-spring-cloud-starter

It integrates seamlessly with Spring Cloud and gives you the following:

  • Generation of request correlation id for any inbound request to the system through one of the edge gateway services.
  • Propagation of the request identifier internally.

What does it do exactly?

The extension adds a servlet filter that processes any incoming request and populates it with a unique identifier. Next, you need a means to propagate that identifier with any outgoing request. The extension approaches this issue with out-of-the-box support for the following use cases:

  • RestTemplate – an interceptor is registered for any RestTemplate bean
  • Feign clients – similarly, a proper request interceptor exists for Feign
  • Zuul proxy routes – these will also propagate the request identifier

It configures the proper request interceptors for the above, or tweaks the request itself, to transparently propagate this information.
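The RestTemplate case can be sketched as follows; the header name and the shape of the interceptor are illustrative, not the extension’s exact code:

```java
import java.io.IOException;
import org.springframework.http.HttpRequest;
import org.springframework.http.client.ClientHttpRequestExecution;
import org.springframework.http.client.ClientHttpRequestInterceptor;
import org.springframework.http.client.ClientHttpResponse;

// Sketch: copies the current thread-bound correlation id into an
// outgoing HTTP header so downstream services can pick it up.
public class CorrelationIdInterceptor implements ClientHttpRequestInterceptor {

    @Override
    public ClientHttpResponse intercept(HttpRequest request, byte[] body,
            ClientHttpRequestExecution execution) throws IOException {
        String correlationId = RequestCorrelationUtils.getCurrentCorrelationId();
        if (correlationId != null) {
            request.getHeaders().add("X-Request-Id", correlationId);
        }
        return execution.execute(request, body);
    }
}
```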

In our applications we use it for:

  • Logging the request id as an MDC field.
  • Request tracing (though not as complex as Zipkin’s)
  • Vnd.errors logref population
  • Spring Boot Actuator auditing
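The MDC case, for instance, amounts to putting the id into the logging context inside the filter; a sketch (the MDC key name is my choice):

```java
// Illustrative only: with the correlation id in the SLF4J MDC it becomes
// available to every log statement on the current thread, e.g. as a
// %X{correlationId} pattern token or as a field in the LogstashEncoder output.
MDC.put("correlationId", RequestCorrelationUtils.getCurrentCorrelationId());
try {
    chain.doFilter(request, response);  // proceed with request processing
} finally {
    MDC.remove("correlationId");        // avoid leaking into pooled threads
}
```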

Let’s build a workable example.

Let’s say that you want to aggregate all of your logs into central storage and use them later for analysis. For this purpose we can use Logstash with ElasticSearch, and a Kibana dashboard for visualization.

It turns out that there is a very simple way to configure Logstash within Spring Boot through the Logback Logstash encoder. We ended up adding a logstash.xml file to one of our utility modules, which is afterwards imported by the application modules. It looks as follows:

<?xml version="1.0" encoding="UTF-8"?>

<included>
    <include resource="org/springframework/boot/logging/logback/base.xml"/>

    <property name="FILE_LOGSTASH" value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}/}spring.log}.json"/>
    <appender name="LOGSTASH" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <encoder>
            <pattern>${FILE_LOG_PATTERN}</pattern>
        </encoder>
        <file>${FILE_LOGSTASH}</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
            <fileNamePattern>${FILE_LOGSTASH}.%i</fileNamePattern>
        </rollingPolicy>
        <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
            <MaxFileSize>10MB</MaxFileSize>
        </triggeringPolicy>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <includeCallerInfo>true</includeCallerInfo>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="LOGSTASH"/>
    </root>
</included>

As you may notice, the file itself imports the Spring Boot Logback configuration, so you could say that it extends it by adding an additional appender.

Later on, any application module can use this predefined configuration by importing the above file into its logback.xml configuration.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>

    <include resource="com/commons/logging/logback/logstash.xml"/>
</configuration>

Next, we need to configure Logstash to read the JSON-encoded log file that the LogstashEncoder will produce. To do that, let’s set up a central Logstash daemon node, and on every application node let’s use the simple logstash-forwarder, which is lighter and consumes fewer resources.

The logstash-forwarder configuration may look as follows:

{
    "network": {
        "servers": [
            "logstash.local:5043"
        ],
        "ssl certificate": "/etc/pki/tls/certs/logstash-forwarder/logstash-forwarder.crt",
        "ssl key": "/etc/pki/tls/private/logstash-forwarder/logstash-forwarder.key",
        "ssl ca": "/etc/pki/tls/certs/logstash-forwarder/logstash-forwarder.crt",
        "timeout": 15
    },
    "files": [
        {
            "paths": [
                "${ENV_SERVICE_LOG}/*.log.json"
            ],
            "fields": {
                "type": "${ENV_SERVICE_NAME}"
            }
        }
    ]
}

Finally, we need our Logstash configuration, which will listen for logstash-forwarder connections and process the input, persisting it in ElasticSearch:

input {
    lumberjack {
        port => 5043

        ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder/logstash-forwarder.crt"
        ssl_key => "/etc/pki/tls/private/logstash-forwarder/logstash-forwarder.key"
    }
}
filter {
    json {
        source => "message"
    }
}
output {
    elasticsearch { host => "elasticsearch.local" }
}

What have we gained by that?
At this point, logs from the entire system are being stored on a central server, where they are indexed and analyzed by ElasticSearch. We can perform queries against them and use Kibana’s visualization features to display them in chart form, to see, for instance, how they fluctuate over time.

If this does not yet convince you, let’s see how we can now handle errors. If you configure your error handler to return a Vnd.error and populate its logref with the request correlation id, the client might receive the following error in the form of a JSON response:

{"logref":"c1bd6562-b28e-497e-9b49-1b4a4a106fe0","message":"status 401 content:\n{\"error\":\"unauthorized\",\"error_description\":\"Full authentication is required to access this resource\"}","links":[]}

We can use the logref value to perform a search in Kibana and find all the logs corresponding to the request, across every service, filtered for instance by ERROR log level.

kibana_correlation_id