Avoid Blocking Calls. Go async!

Introduction

Let’s imagine for a second that we are cooking pasta with some kind of sauce. Would it make sense to pour the water into the pot and keep staring at it until it finally boils, so that we can finally add the pasta? Not so much, huh? We would be blocking ourselves until the water is ready when, in the meantime, we could be chopping vegetables for the sauce instead. Well, that’s synchronous. What if our calls to downstream dependencies and services worked the other way — firing off the request and doing useful work instead of just standing there waiting for the response? Let me show you!

Synchronous blocking calls

First, let’s finish defining what synchronous means.
Synchronous communication requires that each end of an exchange of communication responds in turn without initiating a new communication. Each successive transmission of data requires a response to the previous transmission before a new one can be initiated.
When calling external services from our applications over HTTP, we naturally think of these requests as synchronous, since HTTP is a synchronous protocol in itself. We perform a request and wait for its response; then we make another request and wait, and so on. This creates a bottleneck: the more requests we make, the slower we respond. In addition, each request blocks the execution thread until the server answers, which is a second performance bottleneck, since that thread cannot perform any further task until the request is fulfilled.
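To make the "latencies add up" point concrete, here is a minimal sketch of the blocking pattern. The service names and the fixed 100 ms delay are illustrative stand-ins for real HTTP calls, so the example stays self-contained:

```java
public class BlockingDemo {

    // Stand-in for a blocking HTTP request that takes ~100 ms to answer.
    static String fetchBlocking(String url) throws InterruptedException {
        Thread.sleep(100); // the calling thread is parked here, doing nothing
        return "response from " + url;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        // Three sequential calls: each must finish before the next one starts.
        for (String url : new String[] {"svc-a", "svc-b", "svc-c"}) {
            System.out.println(fetchBlocking(url));
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // Total time is roughly the SUM of the three latencies (~300 ms).
        System.out.println("total: " + elapsedMs + " ms");
    }
}
```

With ten dependencies at 100 ms each, the same loop would take about a full second — all of it spent with the thread idle.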

Asynchronous calls

As opposed to what we discussed before, asynchronous calls are by definition non-blocking (the phrase “asynchronous non-blocking” you often see is redundant; the second word is just for clarification). The request fires off the task in the background, while another thread handles the response through callbacks and events for the different outcomes. This lets us keep performing tasks until the reply to our request arrives, improving performance and reducing bottlenecks. We can achieve this behavior with libraries like AsyncHttpClient (Java), which manages a single thread to receive the responses, and which avoids some of the limitations OkHttp has. OkHttp, for example, despite its asynchronous callbacks, dispatches each call on a thread that stays blocked waiting for the response, and since it manages a limited thread pool internally, you can run out of available threads very easily.

For example, as seen here, we can make asynchronous (non-blocking) calls without using futures at all:

import org.asynchttpclient.*;

AsyncHttpClient asyncHttpClient = new DefaultAsyncHttpClient();
asyncHttpClient.prepareGet("http://www.example.com/").execute(new AsyncCompletionHandler<Response>() {

    @Override
    public Response onCompleted(Response response) throws Exception {
        // Do something with the Response
        // ...
        return response;
    }

    @Override
    public void onThrowable(Throwable t) {
        // Something wrong happened.
    }
});

Another way to do this is to use Java's CompletableFuture, like this:

import static org.asynchttpclient.Dsl.*;

import org.asynchttpclient.*;  
import java.util.concurrent.CompletableFuture;

AsyncHttpClient asyncHttpClient = asyncHttpClient();  
CompletableFuture<Response> promise = asyncHttpClient  
            .prepareGet("http://www.example.com/")
            .execute()
            .toCompletableFuture()
            .exceptionally(t -> { /* Something wrong happened... */ return null; })
            .thenApply(resp -> { /*  Do something with the Response */ return resp; });

// do something else

promise.join(); // wait for completion  
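The same `CompletableFuture` composition also lets us fan out many calls at once and wait for all of them together, so total latency approaches that of the slowest call instead of the sum of all of them. Here is a sketch of that pattern; `fetchAsync` with its fixed 100 ms delay is an illustrative stand-in for `prepareGet(url).execute().toCompletableFuture()`:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class FanOutDemo {

    // Stand-in for an asynchronous call; with AsyncHttpClient this would be
    // prepareGet(url).execute().toCompletableFuture().
    static CompletableFuture<String> fetchAsync(String url) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                Thread.sleep(100); // simulated network latency
            } catch (InterruptedException e) {
                throw new IllegalStateException(e);
            }
            return "response from " + url;
        });
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        List<CompletableFuture<String>> calls = List.of("svc-a", "svc-b", "svc-c")
                .stream()
                .map(FanOutDemo::fetchAsync)
                .collect(Collectors.toList());
        // allOf completes once every call has completed; the calls overlap in time.
        CompletableFuture.allOf(calls.toArray(new CompletableFuture[0])).join();
        List<String> responses = calls.stream()
                .map(CompletableFuture::join)
                .collect(Collectors.toList());
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // Total time is close to the slowest single call, not the sum of all three.
        System.out.println(responses.size() + " responses in " + elapsedMs + " ms");
    }
}
```

Since the requests run concurrently, three 100 ms calls finish in roughly 100 ms rather than 300 ms.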

Conclusion

If we want to get the most out of our calls to downstream dependencies or services, and keep them from becoming a source of latency and bottlenecks, we need to make those calls asynchronous and avoid blocking operations.

Libraries

AsyncHttpClient Page

OkHttp