AWS EventBridge Pattern DSL

EventBridge was recently updated with a set of new pattern matching capabilities, nicely captured in a blog post by James Beswick. The existing functionality of matching events based on exact equality of certain fields has been expanded to support operations like prefix matches, anything-but matches, or matching based on the presence of certain attributes. Altogether this makes it possible to pre-filter events based on specific criteria, which previously was only achievable by adding boilerplate logic to your event consumer and dropping all events that did not meet the criteria.

Even before the pattern matching syntax was expanded with additional keywords and operators, it was not uncommon to misconfigure an event pattern by specifying a wrong attribute name or invalid pattern syntax, whether through the AWS CLI, CloudFormation, or the AWS Console.

To avoid this scenario, make creating a Rule pattern a more bulletproof experience, and allow it to be fully reproducible and testable, I have created a very small utility Java library that introduces a DSL for the EventBridge pattern language.

You can add the library to your existing Maven (or Gradle) project by simply declaring a dependency:

<dependency>
  <groupId>io.jmnarloch</groupId>
  <artifactId>aws-eventbridge-pattern-builder</artifactId>
  <version>1.0.0</version>
</dependency>

The utility is really simple to use. It defines an `EventsPattern` with a fluent API for defining the constraints of the matched events.

As an example, a pattern that matches events published by aws.ec2, which would be expressed in JSON as:

{
  "source": [ "aws.ec2" ]
}

Can be turned into a small Java snippet:

EventsPattern.builder()
        .equal("source", "aws.ec2")
        .build();

Once a pattern has been created, it can be turned into JSON simply by calling the `toJson` method.
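
For instance (assuming here that `toJson` returns the JSON as a String):

String json = EventsPattern.builder()
        .equal("source", "aws.ec2")
        .build()
        .toJson(); // {"source":["aws.ec2"]}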

The newly introduced operators are supported as well.

Prefix matching
For instance, if we wish to match events coming from specific AWS regions, we can create a pattern:

EventsPattern.builder()
        .prefix("region", "eu-")
        .build();

Which corresponds to the JSON syntax:

{
  "region": [{"prefix": "eu-"}]
}

Anything but
The anything-but operator can be used to match everything except a certain value.

EventsPattern.builder()
        .path("detail")
                .anythingBut("state", "initializing")
                .parent()
        .build();

Which will result in:

{
  "detail": {
    "state": [ {"anything-but":"initializing"} ]
  }
}
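
These operators can also be combined within a single pattern. As a sketch, assuming the builder allows chaining the methods shown above:

EventsPattern.builder()
        .equal("source", "aws.ec2")
        .prefix("region", "eu-")
        .path("detail")
                .anythingBut("state", "initializing")
                .parent()
        .build();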

AWS SDK

The above examples can be used as a drop-in replacement whenever using the vanilla AWS SDK.

EventsPattern pattern = EventsPattern.builder()
        .equal("source", "aws.ec2")
        .build();

AmazonCloudWatchEvents eventsClient = AmazonCloudWatchEventsClient.builder().build();
eventsClient.putRule(new PutRuleRequest()
        .withName("EC2-Rule")
        .withRoleArn("arn:aws:iam::123456789012:role/ec2-events-delivery-role")
        .withEventPattern(pattern.toJson()));

Testability

The biggest benefit of moving the rule to code is arguably the possibility to test the rule even before it is deployed. For that purpose a dedicated test can be added as part of the integration test suite.

    @Test
    public void patternShouldMatchEvent() {

        String event = "{\"id\":\"7bf73129-1428-4cd3-a780-95db273d1602\",\"detail-type\":\"EC2 Instance State-change Notification\",\"source\":\"aws.ec2\",\"account\":\"123456789012\",\"time\":\"2015-11-11T21:29:54Z\",\"region\":\"us-east-1\",\"resources\":[  \"arn:aws:ec2:us-east-1:123456789012:instance/i-abcd1111\"  ],\"detail\":{  \"instance-id\":\"i-abcd1111\",  \"state\":\"pending\"  }}";

        EventsPattern pattern = EventsPattern.builder()
                .equal("source", "aws.ec2")
                .build();

        AmazonCloudWatchEvents eventsClient = AmazonCloudWatchEventsClient.builder().build();
        TestEventPatternResult matchResult = eventsClient.testEventPattern(new TestEventPatternRequest()
                .withEvent(event)
                .withEventPattern(pattern.toJson()));
        assertTrue(matchResult.getResult());
    }

Now, each time the event structure itself changes, there is a regression test in place that guarantees the created pattern still matches the event. This is useful during the early development stages or for prototyping purposes.

AWS CDK

AWS CDK is a development framework that allows you to define your AWS infrastructure as code. At the moment the CDK does not yet support the full pattern matching syntax, but the ultimate goal would be to introduce such a capability in the CDK directly.

The project is open source and available on GitHub.

For additional information on the event matching capabilities please refer to the EventBridge documentation.

Automatically discovering SNS message structure

Last month during re:Invent a preview of the EventBridge Schema Registry was announced. One of the unique features the service brings is the automatic discovery of any custom event published to EventBridge. As soon as discovery is enabled on an Event Bus, the service will aggregate the events over a period of time and register them as OpenAPI documents in the registry. You might ask yourself what exactly is the use case for a feature like that. Typically, event discovery will be particularly useful whenever you don’t own the source that publishes the events, for instance when it’s owned by another team in your organization or even by a third party. Even for consuming AWS EventBridge events, prior to the release of this service the typical onboarding process required following the contract of the events defined in the AWS documentation and mapping it into the language of choice. That could be really tedious.

As of today the service integrates with EventBridge, though it is definitely not limited to it. As a matter of fact, it can easily integrate with any JSON-based event source. To demonstrate these capabilities with a small amount of code, let’s see how to automate the discovery of the schema of any SNS topic.

To achieve that, a little bit of wiring needs to be done to pipe the events from SNS to EventBridge. Unfortunately, EventBridge is not a natively supported target for SNS at the moment. Instead, we can use AWS Lambda to consume the events from SNS and forward them to EventBridge.

This leads us to a very straightforward integration.

To wire everything together we can create a SAM template.

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  aws-schema-discovery-sns

  Sample SAM Template for aws-schema-discovery-sns

Globals:
  Function:
    Timeout: 60

Resources:
  SnsDiscoveryTopic:
    Type: AWS::SNS::Topic
    Properties:
      TopicName: 'sns-discovery'

  SnsSchemaDiscoveryFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: app/
      Handler: app.lambdaHandler
      Runtime: nodejs12.x
      Environment:
        Variables:
          event_source: sns.topic
          event_detail_type: !GetAtt SnsDiscoveryTopic.TopicName
      Events:
        SnsEvents:
          Type: SNS
          Properties:
            Topic: !Ref SnsDiscoveryTopic
      Policies:
        - Statement:
            - Sid: EventBridgePutEventsPolicy
              Effect: Allow
              Action:
                - events:PutEvents
              Resource: '*'
Outputs:
  SnsSchemaDiscoveryFunction:
    Description: 'SNS Schema Discovery function Lambda Function ARN'
    Value: !GetAtt SnsSchemaDiscoveryFunction.Arn
  SnsSchemaDiscoveryFunctionIamRole:
    Description: 'Implicit IAM Role created for SNS Schema Discovery function'
    Value: !GetAtt SnsSchemaDiscoveryFunctionRole.Arn

This will deploy, through CloudFormation, a Lambda function subscribed to a dedicated SNS topic. Any message published to SNS will be forwarded to EventBridge and, provided that schema discovery has been enabled on the default Event Bus, the auto-discovered schema will be registered.

The next thing is to develop the Lambda code. To keep things simple, let’s use JavaScript for that.

const AWS = require('aws-sdk');

exports.lambdaHandler = async (event, context) => {
    try {
        const eventbridge = new AWS.EventBridge();
        // map each SNS record to an EventBridge entry; the SNS message body
        // becomes the event detail
        const entries = event.Records.map(record => ({
            Source: process.env.event_source,
            DetailType: process.env.event_detail_type,
            Detail: record.Sns.Message,
            Time: new Date()
        }));
        // await the call so that failures are actually caught below; note that
        // PutEvents accepts at most 10 entries per request, which is fine here
        // since SNS invokes the function with a single record
        return await eventbridge.putEvents({
            Entries: entries
        }).promise();
    } catch (err) {
        console.log(err, err.stack);
        throw err;
    }
};

The code is very simple: it delivers any event published to SNS to EventBridge, using the configured source and detail-type attributes to identify this particular type of event.

The last thing left is to enable discovery on the Event Bus. This can be done from the Console, through CloudFormation, or using a CLI command:

aws schemas create-discoverer --source-arn arn:aws:events:us-east-1:${ACCOUNT_ID}:event-bus/default

Now, by publishing an event to the SNS topic, we will get an automatically registered schema.

To demonstrate that everything is working, let’s publish a sample JSON document to the SNS topic:

{
  "Records": [{
    "AwsRegion": "us-east-1",
    "AwsAccountId": "12345678910",
    "MessageId": "4e4fac8e-cf3a-4de3-b33e-e614fd25c66f",
    "Message": {
      "instance-id":"i-abcd1111",
      "state":"pending"
    }
  }]
}
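
The document could be published, for instance, with the AWS SDK for Java. A minimal sketch, assuming the ARN of the deployed sns-discovery topic (the ARN below is a placeholder):

import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClientBuilder;
import com.amazonaws.services.sns.model.PublishRequest;

// sampleJson holds the document shown above
AmazonSNS sns = AmazonSNSClientBuilder.defaultClient();
sns.publish(new PublishRequest()
        .withTopicArn("arn:aws:sns:us-east-1:123456789012:sns-discovery")
        .withMessage(sampleJson));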

The above event results in the following OpenAPI document being created in the registry:

{
  "openapi": "3.0.0",
  "info": {
    "version": "1.0.0",
    "title": "SnsDiscovery"
  },
  "paths": {},
  "components": {
    "schemas": {
      "AWSEvent": {
        "type": "object",
        "required": ["detail-type", "resources", "detail", "id", "source", "time", "region", "version", "account"],
        "x-amazon-events-detail-type": "sns-discovery",
        "x-amazon-events-source": "sns.topic",
        "properties": {
          "detail": {
            "$ref": "#/components/schemas/SnsDiscovery"
          },
          "account": {
            "type": "string"
          },
          "detail-type": {
            "type": "string"
          },
          "id": {
            "type": "string"
          },
          "region": {
            "type": "string"
          },
          "resources": {
            "type": "array",
            "items": {
              "type": "object"
            }
          },
          "source": {
            "type": "string"
          },
          "time": {
            "type": "string",
            "format": "date-time"
          },
          "version": {
            "type": "string"
          }
        }
      },
      "SnsDiscovery": {
        "type": "object",
        "required": ["Records"],
        "properties": {
          "Records": {
            "type": "array",
            "items": {
              "$ref": "#/components/schemas/SnsDiscoveryItem"
            }
          }
        }
      },
      "SnsDiscoveryItem": {
        "type": "object",
        "required": ["AwsRegion", "Message", "AwsAccountId", "MessageId"],
        "properties": {
          "Message": {
            "$ref": "#/components/schemas/Message"
          },
          "AwsAccountId": {
            "type": "string"
          },
          "AwsRegion": {
            "type": "string"
          },
          "MessageId": {
            "type": "string"
          }
        }
      },
      "Message": {
        "type": "object",
        "required": ["instance-id", "state"],
        "properties": {
          "instance-id": {
            "type": "string"
          },
          "state": {
            "type": "string"
          }
        }
      }
    }
  }
}

The source code of the SAM application is available on GitHub. Feel free to try it out.

Spring Boot RxJava 2

Last month the RxJava 2 GA version was released: https://github.com/ReactiveX/RxJava/releases/tag/v2.0.0

The project has been reworked to support the emerging JVM standard: Reactive Streams.

Thanks to a contribution from Brian Chung, the small side project that I initially authored: https://github.com/jmnarloch/rxjava-spring-boot-starter, which adds support for returning the reactive types Observable and Single from Spring MVC controllers, now also supports RxJava 2.

While Spring itself will support Reactive Streams mostly through its own Project Reactor, RxJava will still be supported across various projects. For instance, the latest Spring Data releases allow you to design repositories with built-in support for RxJava types.
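
For instance, a repository declared against RxJava 2 types could look along these lines (a sketch, assuming a Spring Data release that ships the RxJava 2 repository base interfaces):

import io.reactivex.Flowable;
import org.springframework.data.repository.reactive.RxJava2CrudRepository;

public interface InvoiceRepository extends RxJava2CrudRepository<Invoice, String> {

    // derived query method returning an RxJava 2 reactive type
    Flowable<Invoice> findByCompany(String company);
}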

At the API level, the most significant change in the RxJava Spring Boot starter is the package change: it now supports types from io.reactivex.* instead of rx.*. Besides that, the usage is fairly similar.

Simply add the library to your project:

<dependency>
  <groupId>io.jmnarloch</groupId>
  <artifactId>rxjava-spring-boot-starter</artifactId>
  <version>2.0.0</version>
</dependency>

You can use the RxJava types as return types in your controllers:

import io.reactivex.Observable;

@RestController
public class InvoiceResource {

    @RequestMapping(method = RequestMethod.GET, value = "/invoices", produces = MediaType.APPLICATION_JSON_UTF8_VALUE)
    public Observable<Invoice> getInvoices() {

        return Observable.just(
                new Invoice("Acme", new Date()),
                new Invoice("Oceanic", new Date())
        );
    }
}

If you are looking for a more detailed description of migrating to RxJava 2, here is a comprehensive guide.

Spring Cloud Stream: Hermes Binder

Introduction

Spring Cloud Stream is an interesting initiative for building message-driven applications in the wider Spring ecosystem. I think the main idea is to ease usage and reduce configuration to the bare minimum, compared to the more complex solution that Spring Integration apparently is.

Altogether, Spring Cloud Stream introduces the idea of binders, which are responsible for handling the integration with different message-oriented middleware, at the moment with out-of-the-box support for:

  • RabbitMQ
  • Kafka
  • Redis
  • GemFire

For additional information I highly recommend going through the Spring Cloud Stream reference guide.

Allegro Hermes is a message broker built on top of Kafka, with a REST API that allows it to be easily integrated with HTTP-based clients. It also has a rich set of features, allowing it to pass JSON and binary Avro messages, as well as to broadcast messages or send them in batches.

In order to be able to consume it through Spring Cloud Stream, we need to provide a dedicated binder that is able to deliver the messages to Hermes.

Fortunately there is one here:

https://github.com/jmnarloch/hermes-spring-cloud-starter-stream

Example:

Let’s try to use it in practice, starting from a sample project. You may want to first go through the Hermes quickstart guide to set up your environment.

Next, we will download a Spring Initializr template using httpie:


$ http -f POST https://start.spring.io/starter.zip type=gradle-project style=cloud-stream-binder-kafka > demo.zip

$ unzip demo.zip

Afterwards you can import the project using your favorite IDE.

The first thing to do is to replace spring-cloud-starter-stream-kafka with the Hermes binder:


compile('io.jmnarloch:hermes-spring-cloud-starter-stream:0.2.0')

Let’s start by configuring the Hermes URI for the binder.

spring:
  cloud:
    stream:
      hermes:
        binder:
          uri: 'http://frontend.hermes.local:8080'

Now we can design our binding and the POJO used for the message.

package io.jmnarloch.stream.hermes;

import java.math.BigDecimal;
import java.util.UUID;

public class PriceChangeEvent {

    private final UUID productId;

    private final BigDecimal oldPrice;

    private final BigDecimal newPrice;

    public PriceChangeEvent(UUID productId, BigDecimal oldPrice, BigDecimal newPrice) {
        this.productId = productId;
        this.oldPrice = oldPrice;
        this.newPrice = newPrice;
    }

    public UUID getProductId() {
        return productId;
    }

    public BigDecimal getOldPrice() {
        return oldPrice;
    }

    public BigDecimal getNewPrice() {
        return newPrice;
    }
}

And the binding for the message channel:

package io.jmnarloch.stream.hermes;

import org.springframework.cloud.stream.annotation.Output;
import org.springframework.messaging.MessageChannel;

public interface Events {

    @Output
    MessageChannel priceChanges();
}

Through configuration we can specify the destination topic name and the default content type of the topic.

spring:
  cloud:
    stream:
      bindings:
        priceChanges:
          destination: 'io.jmnarloch.price.change'
          contentType: 'application/json'

In order to enable the Spring Cloud Stream binding, we need to annotate our configuration class.

@Configuration
@EnableBinding(Events.class)
public class EventsConfiguration {
}

Using the binding is straightforward: a proper proxy is going to be created, which can afterwards be injected.

@Component
public class EventsProducer {

    private final Events events;

    @Autowired
    public EventsProducer(Events events) {
        this.events = events;
    }

    public void publishPriceChange(PriceChangeEvent event) {

        events.priceChanges().send(new GenericMessage<>(event));
    }
}

Finally, we can publish our message:

eventsProducer.publishPriceChange(new PriceChangeEvent(uuid, oldPrice, newPrice));

At the moment the binder itself is still under development, but this already presents a workable example.

Publishing binary Avro messages is almost as simple as publishing the JSON ones, and I’m going to cover that in a following blog post.

Spring Boot: Hystrix and ThreadLocals

Foreword

Last time I described a quite useful, at least from my perspective, extension for RxJava, though overall it defined only syntactic sugar so that you could easily specify your custom RxJava Scheduler. One of the mentioned applications is very relevant to this blog post: being able to pass around ThreadLocal variables. In the case of RxJava, whenever we spawn a new thread by subscribing on a Scheduler, it is going to lose any context that was stored in the ThreadLocal variables of the “outer” thread that initiated the task.
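
A short illustration of the problem (a sketch, using the SLF4J MDC as the ThreadLocal-backed context):

import org.slf4j.MDC;
import rx.Observable;
import rx.schedulers.Schedulers;

MDC.put("requestId", "42");
Observable.fromCallable(() -> MDC.get("requestId")) // runs on the IO scheduler thread
        .subscribeOn(Schedulers.io())
        .toBlocking()
        .forEach(System.out::println); // most likely prints null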

The same applies to Hystrix commands.

Initially, the credit for this idea should go to the development team at my previous company, Allegro Tech, but it is such a recurring problem that others have solved it in the past. Yet again, I had the need to solve it once more.

Let’s say that I would like to execute the following command; let’s put aside for a moment the sense of doing so, as it only serves to illustrate the problem:

new HystrixCommand<Object>(commandKey()) {
    @Override
    protected Object run() throws Exception {
        return RequestContextHolder.currentRequestAttributes().getAttribute("RequestId", SCOPE_REQUEST);
    }
}.execute();

Poof, the data is gone.

Even when run in a server container, in the “context” of a request-bound thread, the above code will end with an exception. This happens because Hystrix by default spawns a new thread for executing the code (aside from the semaphore mode that can also be used). Hystrix manages its own thread pools for the commands, and those have no relation to any context stored in the ThreadLocals of the triggering thread.

Overall, ThreadLocal variables might be considered an anti-pattern by some, but they are so useful in many practical scenarios that it is not uncommon for quite a few libraries to depend on them.

Typical examples are your logging MDC context or, in the case of the Spring Framework, the security Authentication/Principal or the request/session scoped beans. So for some use cases it is quite important to be able to correctly pass such information along. Imagine a typical case in which you are trying to use OAuth2RestTemplate in a @HystrixCommand annotated method. Sadly, this isn’t going to work.

The solution

Fortunately, the designers of the Hystrix library have anticipated such a use case and designed proper extension points. Basically, the idea is to let you decorate the executing task with your own logic, applied once it is invoked by the thread.
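
The relevant extension point is Hystrix’s concurrency strategy. As a minimal sketch, a custom strategy can be registered like this:

import com.netflix.hystrix.strategy.HystrixPlugins;
import com.netflix.hystrix.strategy.concurrency.HystrixConcurrencyStrategy;

import java.util.concurrent.Callable;

// registered once at application startup
HystrixPlugins.getInstance().registerConcurrencyStrategy(new HystrixConcurrencyStrategy() {

    @Override
    public <T> Callable<T> wrapCallable(Callable<T> callable) {
        // decorate the task: capture any ThreadLocal state here (calling thread)
        // and restore it around callable.call() (Hystrix thread)
        return callable;
    }
});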

On top of that I’ve prepared a small Spring Boot integration module:

https://github.com/jmnarloch/hystrix-context-spring-boot-starter

At this point the implementation is fairly simple and a bit limited. In order to pass a specific thread-bound value, you need to provide your own custom implementation of HystrixCallableWrapper.

For instance, to “fix” the above snippet we can register the following class as a bean:

@Component
public class RequestAttributeAwareCallableWrapper implements HystrixCallableWrapper {

    @Override
    public <T> Callable<T> wrapCallable(Callable<T> callable) {
        return new RequestAttributeAwareCallable<>(callable, RequestContextHolder.currentRequestAttributes());
    }

    private static class RequestAttributeAwareCallable<T> implements Callable<T> {

        private final Callable<T> callable;
        private final RequestAttributes requestAttributes;

        public RequestAttributeAwareCallable(Callable<T> callable, RequestAttributes requestAttributes) {
            this.callable = callable;
            this.requestAttributes = requestAttributes;
        }

        @Override
        public T call() throws Exception {

            try {
                RequestContextHolder.setRequestAttributes(requestAttributes);
                return callable.call();
            } finally {
                RequestContextHolder.resetRequestAttributes();
            }
        }
    }
}

Adding Java 8 syntactic sugar

It quite soon struck me that this is a rather boilerplate implementation, because for every variable that we would like to share we would have to implement a pretty similar class.

So why not try a slightly different approach: simply create a “template” implementation that can be conveniently filled in with a specific implementation at the defined “extension” points. Java 8 method references could be quite useful here, mostly because in a typical scenario the operations to be performed are limited to: retrieving a value, setting it, and finally clearing any trace of it.

public class HystrixCallableWrapperBuilder<T> {

    private final Supplier<T> supplier;

    private Consumer<T> before;

    private Consumer<T> after;

    public HystrixCallableWrapperBuilder(Supplier<T> supplier) {
        this.supplier = supplier;
    }

    public static <T> HystrixCallableWrapperBuilder<T> usingContext(Supplier<T> supplier) {
        return new HystrixCallableWrapperBuilder<>(supplier);
    }

    public HystrixCallableWrapperBuilder<T> beforeCall(Consumer<T> before) {
        this.before = before;
        return this;
    }

    public HystrixCallableWrapperBuilder<T> beforeCallExecute(Runnable before) {
        this.before = ctx -> before.run();
        return this;
    }

    public HystrixCallableWrapperBuilder<T> afterCall(Consumer<T> after) {
        this.after = after;
        return this;
    }

    public HystrixCallableWrapperBuilder<T> afterCallExecute(Runnable after) {
        this.after = ctx -> after.run();
        return this;
    }

    public HystrixCallableWrapper build() {
        return new HystrixCallableWrapper() {
            @Override
            public <V> Callable<V> wrapCallable(Callable<V> callable) {
                return new AroundHystrixCallableWrapper<V>(callable, supplier.get(), before, after);
            }
        };
    }

    private class AroundHystrixCallableWrapper<V> implements Callable<V> {

        private final Callable<V> callable;

        private final T context;

        private final Consumer<T> before;

        private final Consumer<T> after;

        public AroundHystrixCallableWrapper(Callable<V> callable, T context, Consumer<T> before, Consumer<T> after) {
            this.callable = callable;
            this.context = context;
            this.before = before;
            this.after = after;
        }

        @Override
        public V call() throws Exception {
            try {
                before();
                return callable.call();
            } finally {
                after();
            }
        }

        private void before() {
            if (before != null) {
                before.accept(context);
            }
        }

        private void after() {
            if (after != null) {
                after.accept(context);
            }
        }
    }
}

The above code is not part of the described extension, but you may use it freely as you wish.

Afterwards we may instantiate as many wrappers as we would like to:

HystrixCallableWrapperBuilder
                .usingContext(RequestContextHolder::currentRequestAttributes)
                .beforeCall(RequestContextHolder::setRequestAttributes)
                .afterCallExecute(RequestContextHolder::resetRequestAttributes)
                .build();

As a result, we are able to pass, for instance, the MDC context, the Spring Security Authentication, or any other data that is needed.

Separation of concerns

This is clearly a cleaner solution, but it still has one fundamental drawback: it requires specifying the logic for every single ThreadLocal variable separately. It would be far more convenient to define only the logic for passing the variables across thread boundaries, whether for Hystrix or for any other library such as RxJava. The only catch is that, in order to do so, those variables would first have to be identified and encapsulated in a proper abstraction.
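
As a rough sketch of what such an abstraction could look like (the names here are hypothetical):

// hypothetical abstraction: one implementation per context that should
// cross thread boundaries, usable by Hystrix, RxJava or any other library
public interface ThreadContextPropagator<T> {

    T capture();             // invoked on the calling thread

    void restore(T context); // invoked on the worker thread, before the task runs

    void clear();            // invoked on the worker thread, after the task completes
}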

I think such an idea would be worth implementing, though I don’t yet have a complete solution for it.

Plans

Nevertheless, I would be interested in developing a PoC of such a generic solution and, as done in the past, once again preparing a pull request, for instance to Spring Cloud, to provide such end-to-end functionality.

Spring Boot: RxJava Declarative Schedulers

As a follow-up to last week’s article, Spring Boot: RxJava, there is one additional project:

https://github.com/jmnarloch/rxjava-scheduler-spring-boot-starter

Setup, as with most Spring Boot starters, is fairly simple: you just drop the dependency into your project classpath and you are all set:


<dependency>
  <groupId>io.jmnarloch</groupId>
  <artifactId>rxjava-scheduler-spring-boot-starter</artifactId>
  <version>1.0.0</version>
</dependency>

The library brings one piece of functionality: it allows you to specify the Scheduler for the RxJava reactive types rx.Observable and rx.Single in Spring’s declarative manner, through annotations.

The basic use case is to annotate your bean methods with either the @SubscribeOnBean or the @SubscribeOn annotation.

Example:


    @Service
    public class InvoiceService {

        @SubscribeOnBean("executorScheduler")
        public Observable<Invoice> getUnprocessedInvoices() {
            return Observable.just(
                ...
            );
        }
    }

The motivation here is to ease the integration with the Spring Framework and be able to define application-level schedulers within the DI container. Why would you want to do that? There are a couple of use cases.

For example, you might need to provide a custom scheduler that is aware of ThreadLocal variables. A typical use case is to pass the logging MDC context, so that afterwards the thread running within the RxJava Scheduler can access the same context as the thread that triggered the task, though the applications go beyond that.
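
A minimal sketch of such a scheduler, propagating the SLF4J MDC by wrapping the backing Executor (the pool size is arbitrary):

import org.slf4j.MDC;
import rx.Scheduler;
import rx.schedulers.Schedulers;

import java.util.Map;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

public class MdcSchedulers {

    public static Scheduler mdcPropagating() {
        Executor pool = Executors.newFixedThreadPool(8);
        Executor mdcAware = task -> {
            Map<String, String> context = MDC.getCopyOfContextMap(); // calling thread
            pool.execute(() -> {
                if (context != null) {
                    MDC.setContextMap(context); // restore on the worker thread
                }
                try {
                    task.run();
                } finally {
                    MDC.clear();
                }
            });
        };
        return Schedulers.from(mdcAware);
    }
}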

Another typical example is to customize your scheduler rather than relying on the built-in ones, for instance in order to limit the thread pool size, considering that the built-in schedulers like the IO scheduler are unbounded.
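
For instance, a bounded scheduler can be registered as a bean under the name used in the earlier example (a sketch; the pool size is arbitrary):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import rx.Scheduler;
import rx.schedulers.Schedulers;

import java.util.concurrent.Executors;

@Configuration
public class SchedulerConfiguration {

    // the bean name matches the @SubscribeOnBean("executorScheduler") example above
    @Bean(name = "executorScheduler")
    public Scheduler executorScheduler() {
        return Schedulers.from(Executors.newFixedThreadPool(8));
    }
}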

In case you simply want to rely on the RxJava predefined schedulers, you can still use them with the @SubscribeOn annotation.

    @Service
    public class InvoiceService {

        @SubscribeOn(Scheduler.IO)
        public Observable<Invoice> getInvoices() {
            return Observable.just(
                ...
            );
        }
    }

Spring Boot: RxJava

Back to posting; this will be a bit dated, since I had been working on this integration back in February.

Interestingly enough, I had already prepared a blog post related to this feature within Spring Cloud, which added tight RxJava integration with Spring MVC controllers. Remarkably, it turned out that the implementation in one of the previous milestones had a flaw in it.

I’ve been interested in trying to find a solution to the problem, though I was keen to support mostly the widely used REST-like approach, in which (mostly) the entire payload is returned upon computation, in contrast to streaming the response over HTTP. This approach has been reflected in this small project:

https://github.com/jmnarloch/rxjava-spring-boot-starter

It worked out very well as a reference project, in which I had the opportunity to try out different API implementations. You can also use it on its own in your project, since the proposed implementation depends only on Spring Boot and the Spring Framework’s MethodReturnValueHandler, so if you are simply using Spring Boot without the additional features provided through Spring Cloud, feel free to test it out.

Later, the code of the project became a baseline for the implementation proposed to Spring Cloud.

Spring Cloud approach

The final approach that has been implemented in Spring Cloud is a bit different: first of all, the support for rx.Observable has been removed; instead, you can use rx.Single in a similar manner to DeferredResult, to which the underlying implementation in fact maps the RxJava type. The reference describes this in a bit more detail: http://cloud.spring.io/spring-cloud-static/spring-cloud.html#netflix-rxjava-springmvc
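
With that support in place, a controller returning rx.Single could look along these lines (a sketch):

import rx.Single;

@RestController
public class InvoiceResource {

    // handled much like returning a DeferredResult<Invoice>
    @RequestMapping(method = RequestMethod.GET, value = "/invoice")
    public Single<Invoice> getInvoice() {
        return Single.just(new Invoice("Acme", new Date()));
    }
}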

Funnily enough, at my company one of my colleagues later had to migrate the code of one of the projects from rx.Observable to rx.Single, which he wasn’t really happy about 😉