Circuit Breaker Pattern

When preparing for my presentation at NTK, the largest IT conference in Slovenia, titled Cloud Patterns: How to write applications for the cloud, I explored different architectural patterns related to the reliability of our cloud applications.

One of the most interesting patterns, in my opinion, and one that isn’t often mentioned, is the Circuit Breaker pattern. Described in detail in Michael Nygard’s Release It!, it does essentially what its electrical sibling does. When it detects an anomaly (e.g. over-current), it opens, preventing any additional current from flowing through until it is reset, manually or via some other mechanism.

External Services

One of the key topics of my presentation was decoupling our application from external dependencies. Most of our applications use an external service in one way or another. Unless the complete app, including its data, is hosted in memory, that is. But putting database servers aside for a moment, we also rely on other services, like SendGrid or even Facebook. These services generally save us time (and therefore money), but they have an annoying tendency to fail. Granted, it doesn’t happen often, but when it does, it’s out of our control.

There is no more “can fail”, there is only “will fail”…

Yes, if you decouple your application, use queues, etc., a disruption in SendGrid’s service won’t be a big problem. Sure, some users won’t get their emails right away, but thanks to a properly implemented queue mechanism in our app, they will get them a bit later. The problem arises when the dependency is critical. For example, what if you use a payment gateway to process your credit cards? A disruption there means a loss of revenue.

It is finally becoming common to implement Retry Policies in our code. So when a service goes down, what frequently happens is that our code will retry and retry and retry until the service is back up. Is that good? Yes, if it’s a transient fault; absolutely not if the service is down due to, say, a DDoS attack. Looked at abstractly, what happens in these cases is a sudden flux of communication between us and the external service that generates errors. Ideally, we would want a mechanism that monitors this communication and, when it detects such an anomaly, shuts it down for a certain period of time, or until manually reset.

The Circuit Breaker

Enter the Circuit Breaker. It does exactly what I described above: it sits between our application and another endpoint (an external or internal service) and trips when an anomaly is detected. It can be described as a three-state state machine:

  • Closed; the default state, where all operations pertaining to the other service are executed normally. In this state, if an operation results in an exception/error, a failure counter is incremented. When the counter reaches a certain threshold, the circuit breaker transitions into the
  • Opened state, where no operation is allowed through at all. At the transition, a timer is started. When the timer elapses, or there is a manual reset, the circuit breaker transitions into the
  • Half-opened state. This is basically a probationary state: the first operation is executed, and if it succeeds, the circuit breaker returns to the closed state. If it fails, it transitions back into the open state.

I hope you are starting to see the potential uses already – you don’t have to flood the other server if it’s down. But there are more things you can do. For example, you can query the circuit breaker for its state. And with that information, you can design something that is even more valuable to your users.

Switchable Repositories

In my talk, I proposed building a switchable repository. If you follow Martin Fowler, you’d be inclined to call it a switchable gateway. This applies, of course, if you are using an external service (like SendGrid). To be fair, though, I actually use the Services–Repository architecture most of the time, because technically, to my code, the external service is nothing more than a container for “some” data.

To really get the most out of this pattern, you should be using Dependency Injection (side note: I gave a talk about this a long, long time ago and will probably blog about it soon). What you do in this case is implement a switchable repository and inject both of the third-party-accessing repositories into it. You then tell the DI container that this switchable, circuit-breaker-backed repository is the default implementation of the repository interface. After that, it’s basically just a glorified proxy.

The code for it is here.

You can download the entire sample application from this repository.

Using it

I envisioned using it with a DI framework, where you bind an implementation of something like IDummyRepository to the switchable repository.
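The original binding snippet isn’t reproduced in this version of the post, so here is a hedged sketch of the idea with a hand-rolled container stand-in (in a real app this would be Ninject, Autofac, or similar; IDummyRepository comes from the text above, but every other name here is illustrative):

```csharp
using System;
using System.Collections.Generic;

public interface IDummyRepository
{
    string Get(int id);
}

// Two concrete repositories: one talking to the primary external
// service, one to the backup (both faked here for brevity).
public class PrimaryDummyRepository : IDummyRepository
{
    public string Get(int id) => $"primary:{id}";
}

public class BackupDummyRepository : IDummyRepository
{
    public string Get(int id) => $"backup:{id}";
}

// The switchable repository wraps both; the real one consults its
// circuit breaker to decide which to use, sketched as always-primary.
public class SwitchableDummyRepository : IDummyRepository
{
    private readonly IDummyRepository _primary;
    private readonly IDummyRepository _backup;

    public SwitchableDummyRepository(IDummyRepository primary, IDummyRepository backup)
    {
        _primary = primary;
        _backup = backup;
    }

    public string Get(int id) => _primary.Get(id);
}

// Tiny stand-in for a DI container: bind an interface to a factory,
// resolve it later without callers knowing the concrete type.
public static class Container
{
    private static readonly Dictionary<Type, Func<object>> Bindings =
        new Dictionary<Type, Func<object>>();

    public static void Bind<T>(Func<T> factory) where T : class =>
        Bindings[typeof(T)] = factory;

    public static T Resolve<T>() where T : class => (T)Bindings[typeof(T)]();
}
```

With that in place, the binding itself is one line: `Container.Bind<IDummyRepository>(() => new SwitchableDummyRepository(new PrimaryDummyRepository(), new BackupDummyRepository()));` and callers simply ask for `IDummyRepository`.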

It depends on how you have implemented the repositories, and on whether they are generic enough for this to work, but this should give you a proper head start.

Understanding it

Let’s take a deeper look into how the repository actually works in conjunction with the Circuit Breaker. All operations follow the same principle, but let’s focus on the Get operation:
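The original listing is missing from this version of the post; what follows is a minimal sketch of the shape being described, with an in-memory dictionary standing in for the underlying repository and a trivial stub for the helper (names other than Get and RetriableWrapper are assumptions):

```csharp
using System;
using System.Collections.Generic;

public class SwitchableDummyRepository
{
    // Stand-in for the underlying repository picked by the circuit
    // breaker; a plain dictionary keeps the sketch runnable.
    private readonly Dictionary<int, string> _data =
        new Dictionary<int, string> { { 1, "one" } };

    public string Get(int id)
    {
        // Every operation follows this shape: wrap the real call in a
        // delegate and hand it to RetriableWrapper, which owns the
        // retry and circuit-breaker plumbing.
        return RetriableWrapper(() => _data[id]);
    }

    // Trivial stub; the real helper, discussed next, applies a retry
    // policy and routes the call through the circuit breaker.
    private T RetriableWrapper<T>(Func<T> action) => action();
}
```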

The trick here is calling the RetriableWrapper method:
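That method isn’t shown here either; this is a hedged sketch of one way to write it, with a hand-rolled fixed-count retry loop standing in for a real retry-policy library:

```csharp
using System;
using System.Threading;

public static class Retriable
{
    // Try the action up to maxAttempts times with a short linear
    // back-off between attempts; rethrow only once every attempt has
    // failed. In production the policy would come from a library, and
    // the call itself would go through the circuit breaker's Execute.
    public static T RetriableWrapper<T>(Func<T> action, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return action();
            }
            catch when (attempt < maxAttempts)
            {
                // Back off a little longer on each failed attempt.
                Thread.Sleep(TimeSpan.FromMilliseconds(50 * attempt));
            }
        }
    }
}
```

The `catch when` exception filter keeps the stack trace intact when the final attempt rethrows, which is handy when you later inspect why the breaker tripped.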

This method takes care of the grunt work. It creates a retry policy (note: in production code, this should be handled elsewhere) and keeps calling the action passed in as a parameter for as long as the retry policy permits. The Execute method of the circuit breaker is where the cool stuff happens:
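The actual listing is gone from this version of the post, so here is a minimal Execute sketch consistent with the three-state machine described earlier (the threshold, timeout handling, and member names are my assumptions, not the original code):

```csharp
using System;

public enum CircuitBreakerState { Closed, Open, HalfOpen }

public class CircuitBreaker
{
    private readonly int _failureThreshold;
    private readonly TimeSpan _openTimeout;
    private int _failureCount;
    private DateTime _openedAtUtc;

    public CircuitBreakerState State { get; private set; } = CircuitBreakerState.Closed;

    public CircuitBreaker(int failureThreshold, TimeSpan openTimeout)
    {
        _failureThreshold = failureThreshold;
        _openTimeout = openTimeout;
    }

    public T Execute<T>(Func<T> action)
    {
        if (State == CircuitBreakerState.Open)
        {
            // Fail fast while open; once the timeout elapses, let one
            // probationary call through in the half-open state.
            if (DateTime.UtcNow - _openedAtUtc < _openTimeout)
                throw new InvalidOperationException("Circuit breaker is open.");
            State = CircuitBreakerState.HalfOpen;
        }

        try
        {
            T result = action();
            Reset();                       // success closes the circuit
            return result;
        }
        catch
        {
            RecordFailure();
            throw;
        }
    }

    public void Reset()
    {
        _failureCount = 0;
        State = CircuitBreakerState.Closed;
    }

    private void RecordFailure()
    {
        _failureCount++;
        // A failing probe, or too many failures while closed, trips
        // the breaker open and restarts the timer.
        if (State == CircuitBreakerState.HalfOpen || _failureCount >= _failureThreshold)
        {
            State = CircuitBreakerState.Open;
            _openedAtUtc = DateTime.UtcNow;
        }
    }
}
```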

The last piece of the puzzle is the property “Repository”:
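Again, the listing is missing here; this sketch captures the idea, with the breaker reduced to a bare State flag and fixed repositories standing in for the real ones (Repository and CircuitBreaker are the names the post mentions; everything else is illustrative):

```csharp
using System;

public enum CircuitBreakerState { Closed, Open, HalfOpen }

// Bare stub: only State matters for this property sketch.
public class CircuitBreaker
{
    public CircuitBreakerState State { get; set; } = CircuitBreakerState.Closed;
}

public interface IDummyRepository
{
    string Get(int id);
}

public class FixedRepository : IDummyRepository
{
    private readonly string _tag;
    public FixedRepository(string tag) { _tag = tag; }
    public string Get(int id) => $"{_tag}:{id}";
}

public class SwitchableDummyRepository
{
    private readonly IDummyRepository _primary;
    private readonly IDummyRepository _backup;

    public SwitchableDummyRepository(IDummyRepository primary, IDummyRepository backup)
    {
        _primary = primary;
        _backup = backup;
    }

    public CircuitBreaker CircuitBreaker { get; } = new CircuitBreaker();

    // The Repository property: serve from the primary unless its
    // circuit breaker has tripped open, in which case fall back.
    private IDummyRepository Repository =>
        CircuitBreaker.State == CircuitBreakerState.Open ? _backup : _primary;

    public string Get(int id) => Repository.Get(id);
}
```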

Basically, what I do is call an internal helper with a delegate that invokes the method (e.g. Get, Add, Remove, whatever…) on the Repository property, which returns the proper repository based on the state of the circuit breaker. Each call, however, is made through the circuit breaker. Note that the circuit breaker is also separated based on the repository being used, so there’s an underlying property called CircuitBreaker (I’m sorry about the unoriginal names):
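That property listing is also missing from this version; one plausible shape keeps a separate breaker per underlying repository in a dictionary (a sketch, not the original code):

```csharp
using System;
using System.Collections.Generic;

// Stub breaker; only identity matters in this sketch.
public class CircuitBreaker
{
    public string Name { get; }
    public CircuitBreaker(string name) { Name = name; }
}

public class SwitchableDummyRepository
{
    // One circuit breaker per underlying repository: tripping the
    // breaker for the primary service says nothing about the backup.
    private readonly Dictionary<string, CircuitBreaker> _breakers =
        new Dictionary<string, CircuitBreaker>();

    // In the real code this would track which repository is active.
    public string CurrentRepositoryName { get; set; } = "primary";

    // The CircuitBreaker property: lazily create, then reuse, the
    // breaker belonging to the repository currently in use.
    public CircuitBreaker CircuitBreaker
    {
        get
        {
            CircuitBreaker breaker;
            if (!_breakers.TryGetValue(CurrentRepositoryName, out breaker))
            {
                breaker = new CircuitBreaker(CurrentRepositoryName);
                _breakers[CurrentRepositoryName] = breaker;
            }
            return breaker;
        }
    }
}
```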

The circuit breaker then executes the call, catches exceptions, and trips if the number of failures is too high. In that case, it gives the service some time (defined by the TimeSpan timeout) before switching to the half-open state, where it behaves (to the outside) as if it were closed again: if the call works at this point, it resets; if not, it trips back to open.

Wrapping it up

I hope this approach will be useful to someone out there. I use a derivative of this in my work, and I find it very helpful. The primary application is, as I wrote at the beginning of this post, switching to another external service if one goes offline. The code on GitHub is open; use it if it helps, and let me know if you’d change something (or contribute!).

Anže Vodovnik is a proud father, enthusiastic guitarist and passionate software developer. He enjoys presenting at conferences, sharing his experience from over 15 years of creating software. He was briefly a Microsoft MVP for Azure before forfeiting the title when he joined Microsoft UK, where he’s now working hand-in-hand with customers to help them develop and use solutions based on the Microsoft Azure platform.
