It’s dangerous to go alone: take our guide to the “IDEAL” HTTP client!

Welcome, digital explorer, to the vast terrain of microservices. Whether you’re navigating these challenging lands on your own, or comfortably residing in the serenity of a monolithic architecture, an HTTP client is an essential companion for every developer. But it’s not just about having one; it’s about equipping yourself with the right (or I.D.E.A.L.) one. In this guide, you’ll learn essential techniques to ensure your trusty HTTP client is configured to avoid potential pitfalls. Plus, you’ll get insights into the advantages of separating a client code layer from the application.

Let’s be honest—the modern web application is a complex beast. The days when it was possible to craft everything by yourself have long since faded into the past. Current applications are mostly a collection of services communicating with each other to accomplish a task.

And within this world, the HTTP client is the most important tool in your arsenal.

This guide is about understanding the best practices for any HTTP client and how to leverage them to your advantage. It isn’t limited to backend applications (although most of the examples are in Ruby); the same principles apply to any language and platform, even the frontend.

Let’s discuss the format of this post (a guide to a guide, so you know we mean business here.)

First, the mandatory pathway will guide you through the essentials of stability, debugging, documentation, and clarity. This foundational knowledge will ensure you’re always prepared for what lies ahead. Although the mandatory pathway is mandatory, it’s worth noting that we have no real means to enforce this policy.

Next, the opinionated route will show the art of constructing a robust HTTP client. You’ll dive into the nuances of the abstraction layer, understand the importance of domain modeling, and arm yourself with advanced testing techniques that are not just about detecting issues, but preempting them.

After reading this guide, you’ll be equipped with a powerful and properly configured HTTP client that will serve you well in even the most challenging environments. (Even underwater levels.) And perhaps, most importantly, you’ll more fully understand the value of assigning acronyms to pretty much any organization, process, or concept.

Let’s get started on our I.D.E.A.L. HTTP client journey!

I — Investing in stability (mandatory)

HTTP communications are unstable by nature. The network is always unreliable, and services are unpredictable. When venturing into the unknown, the first thing you need is some sense of stability.

Bad things happen, so you need to be prepared for the worst.

Timeouts

Timeouts are the most important aspect of stability; you need to define a limit to how long you’ll wait for a response. Don’t let your services just hang indefinitely.

Timeouts should be set to a reasonably low duration—several seconds is often sufficient. If you’re using Ruby, the default timeouts (60 seconds) are usually too high for most requests. (You might wonder why these default timeouts are set so high. The reason is compatibility; high timeouts help avoid errors in older code.)

Now, wait a second, timeout! You may be asking—if high defaults are meant to prevent errors, shouldn’t that mean it’s beneficial for my application? The answer is no. Failing to set appropriate timeouts can result in numerous requests from a borked service hanging and blocking your entire application, instead of allowing it to degrade gracefully while missing some functionality.

Moreover, slow responses negatively impact your customers. If the UI is blocked with long requests, you’re doing your customers a cruel injustice and robbing them of the chance to retry.

This is particularly important to consider because data packets can sometimes be lost during transmission due to bad network conditions. (And there won’t be a delivery person you can yell at to make you feel better about it.) Moreover, while users might tolerate a few seconds of delay, if they find themselves waiting for minutes, they’re likely to leave your application.

And here’s the most important consideration: slow requests might go entirely unnoticed. It’s much harder to identify sluggish performance in your application when there are no clear complaints coming in. And customers, for whatever reason, are often reluctant to clearly communicate issues.

So go with the motto of Erlang (or Elixir if you’re feeling young)—fail fast and loud; at least you’ll know something’s wrong.

First of all, you need to set timeouts for all external requests:

Net::HTTP.start(host, port, open_timeout: 2, read_timeout: 5, write_timeout: 5) do
  # ...
end

Broadly speaking, timeouts are tremendously important for any resilient application, so you should set them wherever possible—be it the database, cache, or within other services.

Error Handling

As we already know, errors are inevitable in HTTP communications, but it’s how you deal with them that will really determine your resilience in harsh environments. The most important thing is always expecting and handling HTTP errors gracefully. If you don’t, they’ll bubble up and crash your application. And, in most cases, you don’t want this:

timeout_errors = [Net::OpenTimeout, Net::ReadTimeout] # ...

Rails.error.handle(*timeout_errors) do
  Net::HTTP.get # ...
end

But please don’t overdo it. Remember: excessive error handling can mask the real problems in your application.

Also, keep this in mind about error reporting: you don’t want to flood your error reporting system with a stream of identical errors from some small network disruption. You need to be able to group errors by type, so rather than raising generic StandardErrors, always use custom error classes.

This is a quick little trick to craft a consistent error hierarchy:

class HTTPError < StandardError; end

class ClientError < HTTPError; end # 4xx
class ServerError < HTTPError; end # 5xx

class NotFoundError < ClientError; end # 404
# ...

This will organize your errors by domain and help handle errors in a more concise way:

begin
  EvilMartiansAPI::Client.new.search_chronicles_by_lang(params[:lang])
rescue NotFoundError
  render status: :not_found
rescue ServerError => e
  Rails.error.report(e)
end

Retries

Since networks are unreliable, and there’s no way to avoid this, you should retry requests to simplify recovery from sneaky transient errors:

http = Net::HTTP.new(host, port)
# Net::HTTP only re-attempts idempotent requests after network errors
http.max_retries = 3

Retries are a powerful tool, but you need to be careful with them; they’re useful for some types of errors but not for all of them.

Generally, you can safely retry idempotent requests (GET, HEAD, PUT, DELETE, OPTIONS, TRACE), but you should never retry non-idempotent requests (POST, PATCH).

Retrying non-idempotent requests can result in numerous duplicate entries in the system. One way to guard against this is by using a unique, one-time ID for every request and checking for duplicates before retrying. However, not all upstream services support this method. Additionally, be cautious of misbehaving backends where supposedly idempotent endpoints may produce erroneous side effects.
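
For illustration, here’s a minimal sketch of the idempotency key approach, assuming a hypothetical /payments endpoint on an upstream service that honors the commonly used Idempotency-Key header:

require "net/http"
require "securerandom"

# The same key must be reused across all retries of this particular
# operation so the server can recognize and deduplicate it
idempotency_key = SecureRandom.uuid

request = Net::HTTP::Post.new("/payments", { "Idempotency-Key" => idempotency_key })
request.content_type = "application/json"
request.body = '{"amount": 100}'

Net::HTTP.start(host, port) { |http| http.request(request) }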

Connection timeout errors are generally safe to retry, but you need to be careful with read and write timeouts, as requests may still be completed after your client has disconnected.

Also, be mindful of the potential for further service disruptions. If you retry too often and too aggressively, you risk generating a large volume of duplicate requests, and this may hinder service recovery. At the very least, you should limit the number of retries and the time between them.

For example, you may retry several times with exponential backoff:

# Three attempts, with jittered exponential backoff between them
Retriable.retriable(tries: 3, base_interval: 0.5, multiplier: 2.0, rand_factor: 0.5) do
  EvilMartiansAPI::Client.new.search_chronicles_by_lang # ...
end

Exponential backoff is an effective strategy to avoid overloading the downstream service with retries: distributing retries randomly over increasingly long intervals gives the service room to recover.

In computer science, this issue is commonly known as the “thundering herd problem”. That’s a pretty cool name for a problem, but the result is not so cool. This problem occurs when multiple processes, all awaiting a specific event, are simultaneously triggered once that event takes place, leading them to compete for the same resource.

You can use a generic exponential backoff implementation such as Retriable, or client-specific plugin implementations. A more advanced solution to this problem is the circuit breaker pattern, which prevents overloading the service with retries.

Using a circuit breaker, after several failed requests to a downstream service, the breaker “opens”, causing all new requests to fail instantly. After a set timeout, it enters a “half-open” state, allowing limited test requests. If these succeed, the breaker “closes”, indicating the service has recovered. If they fail, the breaker remains open, waiting for another timeout before retesting.

For example, Circuitbox is a nice implementation of this pattern in Ruby. As it requires data storage, it’s a bit more complex, but it’s a good way to protect your application from cascading failures:

Circuitbox.circuit(:evil_martians, exceptions: [Net::OpenTimeout, Net::ReadTimeout]) do
  Net::HTTP.get # ...
end

And remember, repeated attempts without breaks or reconsideration might lead you further astray.

D — Debugging uncertainties (mandatory)

As we all know, every adventurer faces challenges out in the world. Like, for example, having a bunch of bugs in your sleeping bag. So at one time or another, you’ll be destined to debug some problems. (We made the bug thing work.)

Logging and monitoring

During your journeys, make it a habit to keep logs of external requests:

EvilMartiansAPI.configure do |config|
  config.logger = Rails.logger
end

One of the real challenges with logs is that you often don’t realize you needed them until you’re already faced with a problem that could’ve been diagnosed if they had been in place from the start.

While it might seem like extra work up front, we strongly suggest implementing them early on, especially if your distributed workflow is complex and spans transactional boundaries across various services. Your future self will thank you. (Unless your future self has already been corrupted by evil forces, then, all bets are off. Still, take this advice.)

With error reporting and logging established, let’s move on to external request metrics, an often overlooked aspect in monitoring. It’s essential to monitor the regular flow of client requests and measure normal response times. This enables you to identify issues early and adopt a proactive approach by understanding how requests perform over a sufficiently long period of time.

All common APM tools (New Relic, Skylight, Datadog, and so on) provide a way to monitor external requests. If you’re not using any of them, you might try a self-hosted, battle-tested solution like Yabeda to monitor your application.

Even though it may be more time consuming to implement, a custom solution can be beneficial since you can mix it with your internal business metrics and get a more holistic view of your system.
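
For instance, a minimal Yabeda-based setup for external request metrics might look like this (the group name, metric names, and buckets here are just illustrative):

Yabeda.configure do
  group :external_api do
    counter :requests_total, tags: %i[host status], comment: "Total external HTTP requests"
    histogram :request_duration, tags: %i[host], unit: :seconds,
              buckets: [0.05, 0.1, 0.5, 1, 5], comment: "External HTTP request duration"
  end
end

# Around each request:
started_at = Process.clock_gettime(Process::CLOCK_MONOTONIC)
response = Net::HTTP.get_response(url)
elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started_at

Yabeda.external_api.requests_total.increment({ host: url.host, status: response.code })
Yabeda.external_api.request_duration.measure({ host: url.host }, elapsed)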

User-agents

Sometimes things can go wrong on our side, such as implementing a nasty retry loop. If you’re a good citizen, you’ll want to identify your HTTP client to others:

EvilMartiansAPI.configure do |config|
  config.user_agent = [
    Rails.application.class.name.deconstantize.underscore,
    Rails.env,
    config.user_agent,
    EvilMartiansAPI::VERSION,
  ].join(" - ") # => "dummy_application - production - evil_martians_api_client - 1.0.0"
end

It’s a responsible practice to identify your client with a custom user-agent. This will help identify your requests in logs and track them in external services. Including a version number will also help to track the client’s version and to detect problems with older versions.

Correlation ID and tracing

You’ve already seen correlation IDs in contemporary applications; a famous request ID from the Rails world is a sort of correlation ID. This is a unique ID attached to every request that’s passed between services to track the request and its state.

It’s also a nice idea to add a correlation ID to your HTTP client to track requests to external services in the logs and to correlate them within request and response pairs:

Net::HTTP.get_response(url, { "X-CORRELATION-ID" => request.request_id })

E — Exploring the client (mandatory)

A well-constructed HTTP client is akin to a comprehensive survival manual; it’s quick when first setting things up, and also easy to use in the long run.

Configuration

The configuration of the HTTP client is the most important part of the client’s presentation, and it should be clear and easy to change. The best way to achieve this is to use some sort of configuration DSL. We may be biased here, but we recommend Anyway Config:

module EvilMartiansAPI
  class Config < Anyway::Config
    config_name :evil_martians

    attr_config :host
    attr_config open_timeout: 2
    attr_config read_timeout: 5
    # ...
  end
end

You may want to separate the configuration into several files. This is a good way to keep your configuration clean and directed to a single service. With this approach, you’ll always be able to tame a misbehaving service just by changing its configuration.

The really great thing about Anyway Config is that you can override the configuration with environment variables. And believe us, there will be a situation when the next big fix is still sitting undeployed in a branch, and the team needs to increase the timeout for an external service right away. With Anyway Config, this is just a matter of setting an environment variable:

export EVIL_MARTIANS_OPEN_TIMEOUT=10
export EVIL_MARTIANS_READ_TIMEOUT=25

One last minor thing to consider is the thread safety of the configuration. If you’re using an instance-based configuration approach, this isn’t a concern. However, if you opt for a global, preinitialized object for the HTTP client, you’ll need to ensure that the shared configuration isn’t altered between threads. This is a common pitfall, especially since there’s often a need to adjust the configuration for different endpoints. While Rails makes multithreading “mostly ignorable” in applications, it doesn’t eliminate the inherent risk of different code paths altering the same global object.
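
One way to stay on the safe side is to make configuration instance-based, so every client carries its own copy. Here’s a hypothetical sketch, relying on the fact that Anyway Config accepts explicit overrides in the constructor:

module EvilMartiansAPI
  class Client
    attr_reader :config

    # Every instance gets its own Config object, so tweaking one client
    # never affects requests issued from other threads
    def initialize(overrides = {})
      @config = Config.new(overrides)
    end
  end
end

# Each thread (or request) builds its own client with its own settings:
client = EvilMartiansAPI::Client.new(read_timeout: 10)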

By the way, the best configuration is no configuration. So, try to keep the defaults sane for end users.

Performance

Much like a seasoned adventurer wouldn’t start a journey without a proper travel plan (good snacks are essential to keep up morale), you can’t afford to overlook the performance of an HTTP client over the long run.

One of the most common performance pitfalls is overlooking memory exhaustion due to large file downloads or uploads.

A standard mistake is to load an entire file into memory, which can lead to performance bottlenecks or even application crashes. Streaming large files instead of loading them into memory is nearly a required practice.

You can use a ready-to-use solution like the Down gem, or if you prefer, you can build your own custom solution leveraging the streaming capabilities that most HTTP clients provide:

tempfile = Down.download("https://example.com/")
tempfile #=> #<File:/tmp/down-net_http20231231-25610-2x2wv8>
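
If you’d rather roll your own, here’s a minimal sketch of streaming a download straight to disk with plain Net::HTTP (the URL and file name are just placeholders):

require "net/http"

uri = URI("https://example.com/large-file.zip")

Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == "https") do |http|
  http.request(Net::HTTP::Get.new(uri)) do |response|
    File.open("large-file.zip", "wb") do |file|
      # The body is consumed in chunks, so memory usage stays flat
      # no matter how large the file is
      response.read_body { |chunk| file.write(chunk) }
    end
  end
end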

This issue is particularly insidious because it may go unnoticed for years. It might not catch your attention until, let’s say, a customer decides to upload their 1GB kitten photo archive to your application. You definitely don’t want to be caught unprepared in this situation; that would not be very a-meow-sing!

Moving on to performance optimization, the most important part of this process is to make it meaningful. At the very least, fine-tune your HTTP client based on application-specific performance metrics; what works best will vary depending on the unique needs of each individual application. Remember that simple solutions rock because they have fewer moving parts, which means less potential for errors.

One popular technique to improve performance is to avoid the overhead of establishing a new connection for every request:

Faraday.new(url: host) do |connection|
  # ...
  connection.adapter :net_http_persistent
end

Persistent connections eliminate the overhead of establishing a new connection for every request, which is especially useful if you’re making multiple requests to the same host, like in microservices architecture.

The downside of persistent connections is that they accumulate state, which may lead to annoying problems, such as mysteriously half-broken connections or resource leaks; your monitoring system should be ready to detect these issues. Additionally, persistent connections aren’t always faster, since the gains depend on connection pooling behavior, so it’s advisable to benchmark your application to find the best approach.

To add to the “less work, more performance” mantra, you can also consider HTTP caching. It’s a great way to reduce the number of requests to external services. This approach is useful for requests that are not changed often, but beware of the cache invalidation issues; this is believed to be one of the hardest problems to deal with, so be extra wary.

Another perspective on performance concerns parallelism, which is an excellent way to boost your application’s performance. However, implementing this can introduce significant complexity into the code. You may find it easier to switch to a different programming paradigm, such as the one demonstrated in the next example using Typhoeus, or something more asynchronous like Async::HTTP:

hydra = Typhoeus::Hydra.new
requests = 2.times.map do
  Typhoeus::Request.new("https://example.com/").tap do |request|
    hydra.queue(request)
  end
end
hydra.run
responses = requests.map do |request|
  request.response.code
end #=> [200, 200]

It’s not always possible to parallelize requests. For instance, if you’re using an external service with a strict rate limit, this performance optimization technique simply can’t be utilized.

Standardization

The last mandatory part of the HTTP client to consider is standardization. It doesn’t matter if we’re dealing with a fully-featured client library or just a one-method HTTP request, it’s important to have a standardized way to handle external requests in an application. This will help your teammates avoid the persistent need to investigate the quirks of the next “best HTTP client”.

We do not want to recommend any particular tool here. Honestly, in a world where microservices in different languages are common, it’s just a bad idea to choose “the best”. As a matter of fact, it’s up to you to decide what’s best for your project. But, just be sure to choose one and stick to it to avoid integration spaghetti. (Also worth noting, spaghetti may not really be an appropriate snack for adventuring, either. Go for foods in bar form.)

A — Advancing with abstractions (opinionated)

You may want to stop reading here and start your adventure with your properly configured HTTP client. The previous mandatory parts of this guide should be enough to get going. By following those recommendations, you’ll have a solid foundation for your minimal HTTP client.

But for those willing to ride shotgun with us, we now present the opinionated part of this guide.

As discussed earlier, thinking carefully about how you integrate between services is important; ideally, you want to standardize on a small number of types of integration. But the entire application consists of different layers of abstractions. So, it’s important to choose the right level of abstraction for your HTTP client, too.

HTTP client as a library

This part of the guide is opinionated because implementing a fully-featured HTTP client as a separate library comes with a trade-off: time. (Just like taking the scenic route and carrying the goods to set up a beautiful picnic. It can be worth it, but there are considerations.)

Implementing a separate library can seem like the natural way of doing things since there are clear boundaries between the application and the external service, and it’s easier to reuse common HTTP clients while custom business logic is clearly separated. But if you’re working with a service that is not so popular, it could be difficult to justify a full-featured integration.

However, consider at least a minimal library implementation. Many of the techniques covered in previous chapters are easier to implement if you have a full-featured HTTP client library. Mostly, though, it’s about communicating your intentions to the team. We introduce lots of artificial layers in our applications to make them more maintainable, so why not do the same with external services and encapsulate them in a separate library?

Faraday and its ecosystem

For illustration purposes, we’ll use Faraday as a great example of a layered HTTP client library. It’s a good idea to use it as a reference point for your own HTTP client library. Faraday is like that multipurpose tool every adventurer wishes to have; its versatility can be credited to its modular design, and you can always swap out the components to suit your needs better:

Faraday.new(url: host) do |connection|
  # Faraday 2 style; in Faraday 1 this was connection.basic_auth(login, password)
  connection.request :authorization, :basic, basic_login, basic_password

  connection.options.open_timeout = 2
  connection.options.timeout = 5

  connection.headers[:user_agent] = # ...

  connection.response :logger, Rails.logger
  # ...
end

It’s possible to use Faraday as a communication adapter for various HTTP client libraries. These can be swapped as your requirements dictate, ranging from the simple Net::HTTP to the more complex Async::HTTP::Faraday:

Faraday.new(url: host) do |connection|
  # ...
  connection.adapter :net_http_persistent
end

It’s also possible to use Faraday as a middleware stack. It’s similar to Rack middleware, but for external HTTP requests. There are ready-to-use plugins for a large variety of tasks: from easy things like JSON request and response parsing, to more advanced topics like automatic retries. There is even a directory of community plugins. It’s a great way to reuse existing code and avoid reinventing the wheel:

Faraday.new(url: host) do |connection|
  connection.use :http_cache, store: Rails.cache

  connection.use :evil_martians_raise_http_error

  connection.request :json
  connection.response :json
  # ...
end

But this doesn’t mean that you absolutely must use Faraday; you can choose any other library that suits your needs. The most important thing to take away is Faraday’s experience of constructing a layered HTTP client library. A client library does not have to be flat. You may separate the HTTP client into several reusable layers: configuration, unified error handling and error hierarchy, retries, logging and monitoring facilities, the API client, and domain models.
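
To make the layering concrete, the file layout of such a client library might look something like this (the names are purely illustrative):

evil_martians_api/
├── config.rb      # Anyway Config-based settings
├── errors.rb      # the HTTPError hierarchy
├── middleware/    # retries, logging, instrumentation
├── client.rb      # the low-level transport (e.g., a Faraday connection)
├── api.rb         # endpoint methods returning domain models
└── models/        # typed domain models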

Wait, what are domain models?

L — Laying out a sound structure (opinionated)

Strong preparation is the key to success, and real-world survivalists know this, too. Again, snacks are key here, but it’s also important to have a sound structure for your HTTP client in order to make it easy to use and maintain.

Typed domain models

You can work with requests and responses from external services as plain hashes, but this is error-prone and hard to maintain. It’s the same as working with a database without any ORM. Typed domain models are a way to make these interactions more reliable.

The trick here is to use a well-defined structure to represent the data. It’s a good practice to use a separate class for it to isolate the entity and to make it more maintainable. This is also a nice place to define type and validation logic for the data.

There are several different gems in the Ruby ecosystem to implement typed domain models. You may have already heard about the most popular ones like ActiveModel::Model, dry-struct, Hashie, BloodContracts, and so on. They all have their pros and cons, but the most important part is to choose one and stick to it.

An example of a typed domain model might look as simple as this:

class Response
  include ActiveModel::Model
  include ActiveModel::Attributes

  attribute :id, :integer
  attribute :name, :string
  attribute :created_at, :datetime
  attribute :updated_at, :datetime
end

Typed models make it easier to understand code by using named fields instead of plain hashes. They also establish a contract between the application and external services, helping to prevent subtle errors that might occur if something changes in an external service in production. While serving a similar purpose to documentation, typed models are generally more reliable and easier to maintain.
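
Usage is then straightforward, with ActiveModel casting raw values to the declared types (a small sketch assuming a parsed JSON response body):

raw = JSON.parse('{"id":"42","name":"Mars","created_at":"2023-12-31T00:00:00Z"}')

response = Response.new(raw.slice("id", "name", "created_at"))
response.id         #=> 42 (cast from the string "42")
response.created_at #=> 2023-12-31 00:00:00 UTC (a proper Time value)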

Testing

The final section of our guide (by no means less important than the others) concerns testing.

We highly recommend using a contract testing approach to ensure that your HTTP client works as expected. You may utilize a complex solution like Pact, or just use VCR for these tests. Although there’s some opposition to VCR within the Ruby community because of the effort required to maintain cassette recordings, we believe it still isn’t a good idea to use mocks and stubs in HTTP client tests; it’s better to use VCR to record and replay real HTTP interactions, so you’re not caught off-guard when the actual scenario plays out.
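
If you go with VCR, a typical configuration that enables the `vcr: true` metadata used in the example below might look like this:

VCR.configure do |config|
  config.cassette_library_dir = "spec/cassettes"
  config.hook_into :webmock
  # Allows tagging example groups with `vcr: true` to wrap them in cassettes
  config.configure_rspec_metadata!
end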

One really neat trick to minimize re-recording struggles is to include preparation and cleanup phases for external services directly within the test files. While it may sound unconventional, it’s still common practice to manually prepare test data in these tests—things like test users in the auth microservice, required data in third-party providers, and so on. Why not automate this process by incorporating it into the test itself? (These phases run ONLY when recording the cassette):

RSpec.describe EvilMartiansAPI::Client, vcr: true do
  let(:client) { described_class.new }

  let(:developer_team_number) { 42 }

  vcr_recording_setup { client.create_developer(developer_team_number) }
  vcr_recording_teardown { client.delete_developer(developer_team_number) }

  it 'starts the project with a freshly created developer in the team' do
    client.start_the_next_big_thing([developer_team_number])
  end
end
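
Note that `vcr_recording_setup` and `vcr_recording_teardown` aren’t part of VCR itself; one possible (hypothetical) implementation relies on `VCR.current_cassette.recording?`:

module VCRRecordingHooks
  # Runs the block before/after the example, but only while the cassette
  # is actually being recorded, not when it's replayed
  def vcr_recording_setup(&block)
    before { instance_exec(&block) if VCR.current_cassette&.recording? }
  end

  def vcr_recording_teardown(&block)
    after { instance_exec(&block) if VCR.current_cassette&.recording? }
  end
end

RSpec.configure { |config| config.extend VCRRecordingHooks }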

With this approach, you can always re-record the cassette without any additional effort. Delete the cassette, run the test, and it’ll be re-recorded with the automatically prepared data. Simple as that.

This straightforward, common-sense approach can save a huge amount of time and effort when re-recording cassettes. It’s particularly useful if you’re working with a large number of external services, as is common in a microservices architecture. The approach also makes tests more readable.

Recap of newly obtained survival skills

In the challenging world of services, the HTTP client is an indispensable tool. Just make sure to properly configure it to make it work—and to work well:

  • Use timeouts to avoid hanging requests
  • Be prepared to handle inevitable errors
  • Use retries to mitigate temporary network issues
  • Log and monitor your HTTP client to understand what’s going on
  • Be identifiable by the external service
  • Mark your requests with unique identifiers to track them
  • Keep configuration separated and easy to change
  • Stream large files to avoid memory exhaustion
  • Use persistent connections, HTTP caching, and parallelism to improve performance
  • Standardize the way you integrate with external services
  • Do not be afraid to craft a full-featured HTTP client library
  • Think about the layered structure of your HTTP client
  • Prefer VCR in testing to bring scenarios nearer to the real world

Just as every adventurer respects their gear, it’s time for us to give some love to our trusty HTTP clients. Good luck on your journey, explorer!

At Evil Martians, we transform growth-stage startups into unicorns, build developer tools, and create open source products. If you’re ready to engage warp drive, give us a shout!
