Spring Cloud: Feign Vnd.error decoder

I’ve already teased this extension in one of the previous posts; essentially, I have made a custom Feign error decoder that cooperates with Spring Cloud and automatically handles any Vnd.error returned from remote service calls.

The extension is pretty simple to use; you just drop it into your classpath:

Example pom.xml

<dependency>
  <groupId>com.github.jmnarloch</groupId>
  <artifactId>feign-vnderror-spring-cloud-starter</artifactId>
  <version>1.1.1</version>
</dependency>

or your build.gradle

compile 'com.github.jmnarloch:feign-vnderror-spring-cloud-starter:1.1.1'

From now on, every Vnd.error received through a Feign call will be automatically unmarshalled and used to populate a VndErrorException, giving you more structured access to the error information.
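Conceptually, the decoder only kicks in for responses that actually carry the Vnd.error media type. A minimal, dependency-free sketch of that gating check (a hypothetical helper for illustration, not the starter’s actual code) could look like this:

```java
// Hypothetical sketch of the decoder's gating logic: only responses whose
// Content-Type denotes a Vnd.error payload should be unmarshalled into a
// VndErrorException; anything else falls through to the default decoder.
public class VndErrorSupport {

    static final String VND_ERROR_TYPE = "application/vnd.error+json";

    // Returns true when the response Content-Type denotes a Vnd.error payload,
    // tolerating trailing parameters such as ";charset=UTF-8".
    public static boolean isVndError(String contentType) {
        return contentType != null && contentType.startsWith(VND_ERROR_TYPE);
    }
}
```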

The project source code is available on GitHub: https://github.com/jmnarloch/feign-vnderror-spring-cloud-starter

The server-side setup is also really simple: all you need to do is define a custom Spring @ExceptionHandler that builds the VndError object and returns it as the response:

@ExceptionHandler
public ResponseEntity error(Exception ex) {

    final VndError vndError = new VndError(RequestCorrelationUtils.getCurrentCorrelationId(), ex.getMessage());

    return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
            .header(HttpHeaders.CONTENT_TYPE, "application/vnd.error+json")
            .body(vndError);
}

You may ask yourself what the reason for using Vnd.error is in the first place. The first and foremost gain is that you adopt a unified approach to representing errors within your system, which becomes even more important if you expose the API for public use.

Hibernate 5 with Java 8

This is going to be a follow-up to one of the very first posts, in which I introduced a small Hibernate wrapper.

This time I have released its updated version, which compiles against the just-released Hibernate 5:

https://github.com/jmnarloch/hstreams/tree/hstreams-5

The changes have been rather cosmetic, although Hibernate 5 brings one major change: it natively supports Java 8 types (mostly the JSR-310 date and time API – LocalDate, LocalTime etc.) at the entity mapping level, so you no longer need extensions like Jadira to use them.
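For illustration, an entity using the new mappings might look like this (a hypothetical entity sketched for this post, not taken from the project):

```java
import java.time.LocalDate;
import java.time.LocalTime;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// Hypothetical entity for illustration: with Hibernate 5 the JSR-310 fields
// below are persisted natively, with no Jadira or custom UserType required.
@Entity
public class Meeting {

    @Id
    @GeneratedValue
    private Long id;

    private LocalDate date;      // persisted as a date column

    private LocalTime startTime; // persisted as a time column

    // getters and setters omitted for brevity
}
```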

The wrapper itself adds syntactic sugar for Java 8 lambda-style queries, so you will be able to fully leverage all of the Java 8 features.

Spring Cloud: Zuul error handling

Have you ever bothered to understand how the Spring Cloud Zuul proxy behaves in case of error? If everything works fine, your successful or errored request gets proxied. But what about the cases when Zuul doesn’t have a route definition for a specific service id, the connection itself can’t be established, or the Hystrix circuit is open? There are some edge cases in the Zuul implementation that are worth knowing about.

Let’s see what happens if your Zuul proxy cannot access the service through the discovery client – simply because no service has been registered, or because all of the nodes are in the DOWN state for some reason. You might be surprised by the fact that when an error is triggered in one of the Zuul filters, it won’t be handled by your Spring-registered exception handlers. Instead, the exception will be propagated through the servlet context up to your ‘/error’-mapped handler (though I have observed inconsistent default behavior across different servlet containers). So if you run your app on Tomcat, you might see an error page similar to the one below.

zuul_tomcat

Is everything like you would expect? An HTTP 500 error status and a default page that exposes the application stack trace to the client.

Let’s redo this “test” on Undertow (1.2.12). This time we get an empty page and an HTTP 200 – Success status. (This was the actual cause of a notorious error. Let’s assume that the user just POST-ed “Login” information to a Zuul-proxied backend and received a 200 due to an error in forwarding. You would rather not want that to happen in production.)

zuul_undertow

Both of these behaviours are far from perfect. What if all you wanted to do was perform an AJAX call through Zuul, and this behaviour breaks your API? What if you coded your client with error handling logic that, in case of error, expects a JSON payload with the error details?

So let’s see how we can fix this. Zuul error handling is done through RibbonRoutingFilter and SendErrorFilter, which forward any errored request to ${error.path}, which defaults to ‘/error’. If you rely on the defaults, that will be handled by Spring Boot’s BasicErrorController. You can override this behaviour and implement your own ErrorController. Let’s make one assumption: since we are using Zuul as a reverse proxy, pretty much all of our application is only serving static content (scripts, CSS, HTML etc.), so we are only interested in returning an error to our frontend in an easy-to-understand form. For this purpose let’s create a Vnd.error representation of the error.

Here is how we do this:

@Controller
public class VndErrorController implements ErrorController {

    @Value("${error.path:/error}")
    private String errorPath;
    
    @Override
    public String getErrorPath() {
        return errorPath;
    }

    @RequestMapping(value = "${error.path:/error}", produces = "application/vnd.error+json")
    public @ResponseBody ResponseEntity error(HttpServletRequest request) {

        final String logref = RequestCorrelationUtils.getCurrentCorrelationId();
        final int status = getErrorStatus(request);
        final String errorMessage = getErrorMessage(request);
        final VndError error = new VndError(logref, errorMessage);
        return ResponseEntity.status(status).body(error);
    }

    private int getErrorStatus(HttpServletRequest request) {
        Integer statusCode = (Integer)request.getAttribute("javax.servlet.error.status_code");
        return statusCode != null ? statusCode : HttpStatus.INTERNAL_SERVER_ERROR.value();
    }

    private String getErrorMessage(HttpServletRequest request) {
        final Throwable exc = (Throwable) request.getAttribute("javax.servlet.error.exception");
        return exc != null ? exc.getMessage() : "Unexpected error occurred";
    }
}

You may notice the RequestCorrelationUtils.getCurrentCorrelationId method from the previous post. What we have done here is implement a custom error handling controller that creates a VndError based on the servlet error attributes and returns it as the response payload.

Undertow setup

Undertow is a bit different and requires special handling. By default it disallows non-standard request wrappers and throws an exception if they are used. You need to configure it correctly in order to use it with Zuul.

    @Bean
    public UndertowEmbeddedServletContainerFactory embeddedServletContainerFactory() {
        UndertowEmbeddedServletContainerFactory factory = new UndertowEmbeddedServletContainerFactory();
        factory.addDeploymentInfoCustomizers(new UndertowDeploymentInfoCustomizer() {
            @Override
            public void customize(DeploymentInfo deploymentInfo) {
                deploymentInfo.setAllowNonStandardWrappers(true);
            }
        });
        return factory;
    }

We need to customize our deployment and set allow-non-standard-wrappers to true.

Java partial function application

I’ve made a small experiment with the Java 8 lambda features, and as a result I’ve created this small utility library that implements a pretty simple partial function application paradigm. It won’t be as flexible as in other languages (like Scala), since the implementation is library based and Java does not have any language-level support for this. So the constraint at this moment is that a function can take up to 5 different arguments.

Let’s see the code:

Provided that we have a simple mathematical function:

public int substract(int a, int b) {
    return a - b;
}

We can take a method reference, assign it to a functional interface and invoke it:

int sum = Partial.function(this::substract)
        .apply(1, 2);

We can achieve the same effect by applying the arguments one by one and invoking the function at the end:

int sum = Partial.function(this::substract)
        .arg(1)
        .arg(2)
        .apply();

It’s even possible to apply the arguments from “right to left”:

int sum = Partial.function(this::substract)
        .rarg(2)
        .rarg(1)
        .apply();

What is noticeable, and I wasn’t fully aware of this, is that you can assign a varargs method reference to a functional interface taking any number of arguments. Below are all valid assignments:

Function f = Functions::funcVarArgs;
Function1<String, String> f1 = Functions::funcVarArgs;
Function2<String, String, String> f2 = Functions::funcVarArgs;

You don’t have to use Partial; you can directly assign the lambda or method reference to one of the Function interfaces. The Partial class, however, creates a nice form of DSL for invoking the method in place.
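To illustrate the idea behind such partial application with nothing but the JDK (this is a simplified sketch, not funava’s actual implementation), fixing the first argument of a two-argument function yields a one-argument function:

```java
import java.util.function.BiFunction;
import java.util.function.Function;

// A simplified, dependency-free sketch of partial application: binding the
// first argument of a two-argument function produces a one-argument function.
public class PartialSketch {

    // Fix the first argument of a BiFunction, yielding a Function.
    public static <A, B, R> Function<B, R> bindFirst(BiFunction<A, B, R> f, A a) {
        return b -> f.apply(a, b);
    }

    public static int substract(int a, int b) {
        return a - b;
    }

    public static void main(String[] args) {
        // 5 - 2, with the first argument applied up front
        Function<Integer, Integer> fromFive = bindFirst(PartialSketch::substract, 5);
        System.out.println(fromFive.apply(2)); // prints 3
    }
}
```

The library generalizes this idea up to five arguments, in both left-to-right and right-to-left order.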

The partial applications are statically typed. Example:

public String url(String scheme, String host, int port, String username, String repo) {
        return String.format("%s://%s:%d/%s/%s", scheme, host, port, username, repo);
}

String url = Partial.function(this::url)
                .arg("https")
                .arg("github.com")
                .arg(443)
                .arg("jmnarloch")
                .apply("funava");

The source code is available on GitHub:

https://github.com/jmnarloch/funava

Spring Cloud: Request correlation

Sometimes development might seem like reinventing the wheel, and this post might partially look that way, since it does not invent anything new; it just provides an implementation for a common problem – request correlation.

It’s a truism to say that in a microservice environment HTTP requests (apart from any other data that is being exchanged, like messages) may be propagated to and processed by multiple individual services. It becomes a non-trivial task to trace them, and you need additional information to be able to track them. A simple and proven method is to pass a unique identifier with each individual request. There are various approaches, and no unified header exists for this purpose. Across the web you will find solutions that use X-Request-Id, X-Trace-Id or X-Correlation-Id. These values are always bound to the upstream request through HTTP headers, and in most cases also bound internally by each process to the currently processed thread.

We have been using exactly the same approach in our project, through a simple implementation:

https://github.com/jmnarloch/request-correlation-spring-cloud-starter

It integrates seamlessly with Spring Cloud and gives you the following:

  • Generation of a request correlation id for any inbound request entering the system through one of the edge gateway services.
  • Propagation of the request identifier internally.

What does it do exactly?

The extension adds a servlet filter that processes any incoming request and populates it with a unique identifier. Next, you need a means to propagate the identifier with any outgoing request. The extension approaches this issue with out-of-the-box support for the following use cases:

  • RestTemplate – an interceptor is registered on any RestTemplate bean
  • Feign clients – similarly, a proper request interceptor exists for Feign
  • Zuul proxy routes – these will also propagate the request identifier

It configures proper request interceptors for the above, or tweaks the request itself, to transparently propagate this information.
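The thread binding described above can be sketched with plain Java (hypothetical class and header names for illustration; the starter’s actual RequestCorrelationUtils may differ):

```java
import java.util.UUID;

// A minimal sketch of binding a correlation id to the processing thread:
// the servlet filter would call initialize() for each inbound request, and
// the outgoing-request interceptors would read getCurrentCorrelationId()
// to set the header on RestTemplate, Feign or Zuul requests.
public class CorrelationIdHolder {

    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    // Reuse the id carried by the incoming request, or generate a fresh one.
    public static String initialize(String incomingId) {
        String id = (incomingId != null) ? incomingId : UUID.randomUUID().toString();
        CURRENT.set(id);
        return id;
    }

    public static String getCurrentCorrelationId() {
        return CURRENT.get();
    }

    // Must be called when request processing ends to avoid leaking the value
    // into the next request served by a pooled thread.
    public static void clear() {
        CURRENT.remove();
    }
}
```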

In our applications we use the correlation id for:

  • Logging request id as MDC field.
  • Request tracing (though not as complex as Zipkin’s)
  • Vnd.errors logref population
  • Spring Boot Actuator auditing

Let’s put together a working example.

Let’s say that you want to aggregate all of your logs into central storage and use them later for analysis. For this purpose we can use Logstash with ElasticSearch, and a Kibana dashboard for visualization.

It turns out that there is a very simple way to configure Logstash within Spring Boot, through the Logback Logstash encoder. We ended up adding a logstash.xml file in one of our utility modules, which is afterwards imported by the application modules. It looks as follows:

<?xml version="1.0" encoding="UTF-8"?>

<included>
    <include resource="org/springframework/boot/logging/logback/base.xml"/>

    <property name="FILE_LOGSTASH" value="${LOG_FILE:-${LOG_PATH:-${LOG_TEMP:-${java.io.tmpdir:-/tmp}}/}spring.log}.json"/>
    <appender name="LOGSTASH" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <encoder>
            <pattern>${FILE_LOG_PATTERN}</pattern>
        </encoder>
        <file>${FILE_LOGSTASH}</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
            <fileNamePattern>${FILE_LOGSTASH}.%i</fileNamePattern>
        </rollingPolicy>
        <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
            <MaxFileSize>10MB</MaxFileSize>
        </triggeringPolicy>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <includeCallerInfo>true</includeCallerInfo>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="LOGSTASH"/>
    </root>
</included>

As you may notice, the file itself imports the Spring Boot Logback configuration, so you could say that it extends it by adding an additional appender.

Later on, any application module can use this predefined configuration by importing the above file in its logback.xml configuration.

<?xml version="1.0" encoding="UTF-8"?>
<configuration>

    <include resource="com/commons/logging/logback/logstash.xml"/>
</configuration>

Next we need to configure Logstash to read the JSON-encoded log file that LogstashEncoder will produce. To do that, let’s set up a central Logstash daemon node, and on every application node let’s use the simple logstash-forwarder, which is lighter and consumes fewer resources.

The logstash-forwarder configuration may look as follows:

{
    "network": {
        "servers": [
            "logstash.local:5043"
        ],
        "ssl certificate": "/etc/pki/tls/certs/logstash-forwarder/logstash-forwarder.crt",
        "ssl key": "/etc/pki/tls/private/logstash-forwarder/logstash-forwarder.key",
        "ssl ca": "/etc/pki/tls/certs/logstash-forwarder/logstash-forwarder.crt",
        "timeout": 15
    },
    "files": [
        {
            "paths": [
                "${ENV_SERVICE_LOG}/*.log.json"
            ],
            "fields": {
                "type": "${ENV_SERVICE_NAME}"
            }
        }
    ]
}

Finally we need our Logstash configuration, which will listen for logstash-forwarder connections and process the input to finally persist it in ElasticSearch:

input {
    lumberjack {
        port => 5043

        ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder/logstash-forwarder.crt"
        ssl_key => "/etc/pki/tls/private/logstash-forwarder/logstash-forwarder.key"
    }
}
filter {
    json {
        source => "message"
    }
}
output {
    elasticsearch { host => "elasticsearch.local" }
}

What have we gained by that?
At this point the logs from the entire system are stored on a central server, where they are indexed and analyzed by ElasticSearch. We can perform queries against them and use Kibana’s visualization features to display them in the form of a chart, to see for instance how they fluctuate over time.

If this does not convince you yet, let’s see how we can handle errors now. If you configure your error handler to return a Vnd.error and populate its logref with the request correlation id, the client might receive the following error in the form of a JSON response:

{"logref":"c1bd6562-b28e-497e-9b49-1b4a4a106fe0","message":"status 401 content:\n{\"error\":\"unauthorized\",\"error_description\":\"Full authentication is required to access this resource\"}","links":[]}

We can use the logref value to perform a search in Kibana and find all the logs corresponding to that request across every service, filtered for instance by the ERROR log level.

kibana_correlation_id

Go Continuous Delivery: Gradle task

If you haven’t heard about the GO CD server yet, you might want to take a quick look into its documentation first: http://www.go.cd/documentation/user/current/

The very first thing that distinguishes it from other Continuous Integration servers like TeamCity, Bamboo or Jenkins is the fact that it gives you a fully featured environment for automating not only your builds, but also your deployments.

It has a much more abstract concept of server agents, which can be used both for compiling and packaging your application, as well as for deploying and running it in your dev or production environments.

It takes some time to fully setup your build pipeline, but afterwards it is entirely worth it.

pipeline_build

The one thing that, in my opinion, was lacking in Go in general is tasks targeted at the build systems of specific languages/platforms – for example Gradle. It is still possible to run Gradle as a “Custom task”, no matter whether you are using the gradle command or the gradlew wrapper script. Despite that, I wanted more native GO CD support; this is why I created my own GO CD Gradle plugin:

https://github.com/jmnarloch/gocd-gradle-plugin

Version 0.1 is already available for download, though I am actively working on preparing a version that will be adjusted to the latest Go JSON API.

You can install it by placing the distribution JAR in the $GO_SERVER_HOME/plugins/external directory and restarting the server.

After that you will be able to add Gradle tasks to your build pipelines. The plugin gives you a UI with a similar look and feel to any other standard task:

 

go-cd-gradle-task

Depending on your settings, you can either run your build using the wrapper script or the configured gradle command. You may set the GRADLE_HOME environment variable if you want to configure that globally for your entire environment.

Spring Cloud: Fixing Eureka application status

Spring Cloud integrates seamlessly with Netflix Eureka; you can use it for discovering your services. It also has integration with Ribbon, a client-side load balancer, which will be used whenever you use RestTemplate or Feign clients to make requests to one of your registered services.

Although just recently I’ve discovered that, in the case of Spring Cloud (version 1.0.3), the applications registered in Eureka do not entirely reflect the actual application state. I’m referring here to Spring Boot Actuator’s health checks; by default these are not propagated to Eureka. Eureka always happily announces that all applications are in the UP state. As a result, you may experience a situation where, despite the fact that your application is in fact unhealthy, for instance due to a backend service connectivity problem, other applications will still be sending traffic towards it. Generally this isn’t a desired situation, so I would prefer that Eureka had an accurate and up-to-date application status.

How to fix this?

Fortunately there is a simple solution for that. Eureka expects that its client will configure a HealthCheckHandler, which will be used with each Eureka heartbeat to determine the current status. So let’s define our custom implementation that will do the job. Additionally, it turns out that the default statuses defined by Actuator and Eureka match each other perfectly, so all that needs to be done is to determine the current aggregated application status, map it to Eureka’s, and return it.

Let’s jump directly to the implementation.

public class EurekaHealthCheckHandler implements HealthCheckHandler, ApplicationContextAware, InitializingBean {

    private static final Map<Status, InstanceInfo.InstanceStatus> healthStatuses = new HashMap<Status, InstanceInfo.InstanceStatus>() {{
        put(Status.UNKNOWN, InstanceInfo.InstanceStatus.UNKNOWN);
        put(Status.OUT_OF_SERVICE, InstanceInfo.InstanceStatus.OUT_OF_SERVICE);
        put(Status.DOWN, InstanceInfo.InstanceStatus.DOWN);
        put(Status.UP, InstanceInfo.InstanceStatus.UP);
    }};

    private final CompositeHealthIndicator healthIndicator;

    private ApplicationContext applicationContext;

    public EurekaHealthCheckHandler(HealthAggregator healthAggregator) {
        Assert.notNull(healthAggregator, "HealthAggregator must not be null");

        this.healthIndicator = new CompositeHealthIndicator(healthAggregator);
    }
    
    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        this.applicationContext = applicationContext;
    }

    @Override
    public void afterPropertiesSet() throws Exception {

        final Map<String, HealthIndicator> healthIndicators = applicationContext.getBeansOfType(HealthIndicator.class);
        for (Map.Entry<String, HealthIndicator> entry : healthIndicators.entrySet()) {
            healthIndicator.addHealthIndicator(entry.getKey(), entry.getValue());
        }
    }

    @Override
    public InstanceInfo.InstanceStatus getStatus(InstanceInfo.InstanceStatus instanceStatus) {

        return getHealthStatus();
    }

    protected InstanceInfo.InstanceStatus getHealthStatus() {
        final Status status = healthIndicator.health().getStatus();
        return mapToInstanceStatus(status);
    }

    protected InstanceInfo.InstanceStatus mapToInstanceStatus(Status status) {
        if(!healthStatuses.containsKey(status)) {
            return InstanceInfo.InstanceStatus.UNKNOWN;
        }
        return healthStatuses.get(status);
    }
}

As you can see, we have implemented our own HealthCheckHandler. Going from the top, you will notice the mapping between the corresponding statuses. The next important part is the CompositeHealthIndicator instance, to which we delegate establishing the status of the entire application based on all of the individual health checks registered within the application context. The remaining part is trivial.

We only need to register our health check handler.


@Configuration
public class EurekaHealthCheckHandlerConfiguration {

    @Autowired(required = false)
    private HealthAggregator healthAggregator = new OrderedHealthAggregator();

    @Bean
    @ConditionalOnMissingBean
    public EurekaHealthCheckHandler eurekaHealthCheckHandler() {
        return new EurekaHealthCheckHandler(healthAggregator);
    }
}

This will end the good old times when every application was in the green (UP) state, and force us to deal with real-life problems 😉

 

Update: Sneak peaking Spring Cloud Netflix 1.1

The behavior described above is going to be an option for Eureka clients starting from Spring Cloud 1.1.

In order to turn it on you will need to set ‘eureka.client.healthcheck.enabled’:

eureka.client.healthcheck.enabled: true

Codility: Go

A couple of months back I wanted to remind myself of some concepts of algorithmics, although reading books and learning theory, as in most cases, is not enough. I also wanted to learn by solving some practical problems. There are plenty of pages on the web related to this topic, like Codility, Project Euler, LeetCode or LintCode, to mention just a few.

I started by doing part of the Codility open questions – the Codility lessons. At that particular point in time I wanted to improve my practical programming skills in Go, so I solved 40 example tasks entirely in that language, together with base test cases in Ginkgo.

If you are interested in finding solutions to some of the problems, you can find them in this GitHub project: https://github.com/jmnarloch/codility-go

Last week Go 1.5 was released, so it might be a good time for a small update 😉

 

Spring Cloud: Node, Spring Cloud Sidecar and Consul

Spring Cloud Sidecar is part of the Spring Cloud Netflix project that lets you set up a mediator between your Spring Cloud system and any other application, not necessarily written in a JVM language (generally any application that can’t easily integrate with Spring Cloud). The original idea was introduced by the Netflix Prana project. This can be especially useful when working on polyglot projects.

By setting up Spring Cloud Sidecar you gain bilateral benefits. First of all, your cloud becomes aware of the application – it becomes discoverable, and for instance you can monitor it through automated health checks. Second of all, all the services registered within the cloud can be more easily consumed by that application. You could look up other services’ IP addresses or read configuration from a central configuration server.

Setting up a sidecar project is a fairly easy task to do; you only need a running discovery service. At this point Spring Cloud gives you two options – you can use Netflix Eureka or HashiCorp Consul (though there is an ongoing effort to add ZooKeeper integration). I guess there are a lot of materials on the web about setting up a Eureka server within a Spring Cloud project, so let’s do an experiment and use Consul instead.

Reminder: at the time of writing this post, the Spring Cloud Consul project was available only as a milestone (M1), so I am using it only for showcasing.

Let’s start by setting up the project. Let’s use Node as our runtime, although we could use any other language instead. I’m going to make my life easier here and use a Yeoman generator:


$ npm i -g generator-express-sidecar

$ yo express-sidecar

Choose your project name and select Consul as your discovery service.

express_sidecar_generator

You will be asked to provide the host and port, but you may rely on the defaults. The generator will install all of the Node package dependencies, and will also download the dependencies for the sidecar project and assemble it.

The project structure looks as follows:


.
├── app
│ ├── app.js
│ ├── bin
│ ├── package.json
│ ├── public
│ ├── routes
│ └── views
└── sidecar
  ├── build.gradle
  ├── gradle
  ├── gradlew
  ├── gradlew.bat
  └── src

The app directory is a simple Express application; there is nothing extraordinary about it. The sidecar directory contains our Spring Cloud mediator.

Before we continue, we need to run Consul. On OS X, all you need to do to install it is execute:

brew cask install consul

On other systems you may need to download the binary distribution and install it manually. There is one optional step – you can download the UI for the Consul agent. Unzip it anywhere; the path will be specified when starting the agent.

We can now run the Consul agent:


$ consul agent -server -bootstrap-expect 1 -data-dir /tmp/consul -ui-dir /consul/ui

You will have to adjust the -ui-dir parameter accordingly.

Now, with Consul running, let’s start our application:

$ cd app && DEBUG=app:* npm start

Next is the sidecar project:

$ cd sidecar && ./gradlew bootRun

The application will listen on http://localhost:3000 and the sidecar on http://localhost:5678. You can open both addresses in your browser.

Let’s verify that the application has been registered in Consul. If you have installed the UI module, navigate to http://localhost:8500/ui/. If everything went successfully, your dashboard should look similar to this:

consul_dashboard

In the logs you should see some health checks that the sidecar does for you automatically. You may also want to investigate the sidecar home page, where you will find autogenerated links to the registered services that you can consume from within the Node application.

I hope that you enjoyed this short introduction. I plan to explore this a bit more, and maybe in the future publish additional blog posts on this topic.

Yeoman: Scala in the browser

Recently I made another small Yeoman generator, this time for Scala.js. The original code was based on the Scala.js TodoMVC example.

The generator scaffolds a full-stack project entirely written in the Scala language. The backend uses the Play Framework; the frontend uses Scala.js Angular. As you may expect, Scala.js Angular defines the bindings to AngularJS, and the whole web application is compiled into JavaScript code.

Let’s see this in practice:


$ npm i -g generator-scalajs-angular

Create a new directory for your project and run Yeoman to scaffold it:

 
$ mkdir scalajs-angular-app && cd $_
$ yo scalajs-angular 

Note: the first run will execute sbt compile, which might take a long time due to downloading all of the project dependencies.

The directory structure looks as follows:

.
├── build.sbt
├── project
│   ├── Build.scala
│   ├── build.properties
│   ├── plugins.sbt
├── scalajs
│   ├── src
└── scalajvm
    ├── app
    ├── conf
    ├── public

Here scalajvm is the Play backend; it serves the static content, as well as implementing the basic logic to persist the data in an embedded H2 database.

In contrast, the scalajs directory contains the files that are compiled into JavaScript. What does that look like? More or less like regular Scala code:

@JSExport
object TodoApp extends JSApp {

  override def main() {
    val module = Angular.module("todomvc")

    module
      .controller[TodoCtrl]
      .directive[TodoItemDirective]
      .directive[EscapeDirective]
      .directive[FocusDirective]
      .filter[StatusFilter]
      .factory[TaskServiceFactory]
  }
}

There are two important parts here. The first is the JSExport annotation, which indicates that this class should be exported to JavaScript. The second one is JSApp: extending this class indicates that this is the entry point of the application, and that the main method should be executed when the application gets loaded by the browser.

Let’s run this application:


$ sbt run

After the server starts, open http://localhost:9000 in your browser. You should see a similar page.

Scala Angular Screen

What is interesting is the fact that you can actually debug your Scala code in the browser: just open the Chrome Developer Tools or Firebug and you can step through the application line by line. Scala.js will automatically generate the source map for your project.

Scalajs debuging

 

You can find more details on the project page:

https://www.npmjs.com/package/generator-scalajs-angular