Don't trust the first item in the X-Forwarded-For header

Any security-related use of X-Forwarded-For (such as for rate limiting or IP-based access control) must only use IP addresses added by a trusted proxy. Using untrustworthy values can result in rate-limiter avoidance, access-control bypass, memory exhaustion, or other negative security or availability consequences.

-- MDN's X-Forwarded-For article

Short version:

  • Do not take the first IP address listed in an X-Forwarded-For header, assume it's the public IP of the end user who sent you a request, and use it for rate limiting. An attacker can usually set it to whatever they like, and bypass your rate limiting.
  • If you're going to use the end user's IP address for anything security-related, like rate limiting, and you want to get that IP address from X-Forwarded-For, then your application needs to know how many reverse proxies it is running behind (and there mustn't be a way for the end user to bypass any of those reverse proxies).
  • If your web server framework provides some function that purports to get the end user's IP address by parsing the X-Forwarded-For header, but doesn't require you to tell it how many reverse proxies it's running behind, don't trust it. It is basically guaranteed to be vulnerable to spoofing.
  • If you're behind a single reverse proxy, you want to use the last IP address listed in X-Forwarded-For. If you're behind two reverse proxies, you want the second-last IP, and so on.

Long version:

There is an irksome mistake I've seen colleagues make a couple of times in my career now: adding an IP-based rate limiter to a web application that any attacker can trivially bypass by simply adding an X-Forwarded-For header with a randomly-generated IP address to each HTTP request they send. I'm writing this blog post to explain the problem and to give me something to link to next time I inevitably see the same mistake.

Suppose you have a web application running on servers behind some sort of load balancer (or perhaps behind multiple layers of load balancers and reverse proxies, but for the sake of simplicity, let's say there's just one). You want to implement an IP-based rate limiter in the application itself[1]. For instance, let's say you want to make it so that a given IP address can only make 10 login attempts per 5 minute interval, as a way to make it harder for attackers to brute-force users' passwords.

Since the requests to your application are coming via the load balancer rather than directly from the user's computer, the client IP address as seen by the application will be the IP address of the load balancer. Obviously, that's no good. Load balancers and reverse proxies try to solve this problem for you by sticking the client's original IP in a header in the HTTP request, which is almost always called X-Forwarded-For.[2]

Because some applications have multiple reverse proxies in front of them (e.g. requests might first go through a load balancer, then through a caching reverse proxy like Varnish, and only then reach the actual web application), the X-Forwarded-For header can actually contain a comma-separated list of IP addresses, and when a proxy receives a request that already has an X-Forwarded-For header it's supposed to append the IP address it received the request from to the end of the header. That means that in typical circumstances, where the end user sends a request from their browser to your domain with no X-Forwarded-For header, the first (leftmost) element in that comma-separated list will be the public IP of the end user's computer. But you can't trust this! A malicious user can send a request that already has an

X-Forwarded-For: 123.123.123.123

header, and if your load balancer behaves in the normal way, it will append the actual public IP after that, and your application will receive something like this:

X-Forwarded-For: 123.123.123.123, 94.6.194.169

If you're naively treating the first element of the list as the user's IP address, and using that for rate limiting, then you've just allowed an attacker to trick you into seeing their address as 123.123.123.123 when that's not really the address the request came from. By changing the X-Forwarded-For header to a random IP address on each successive request, an attacker can totally circumvent your rate limiting.
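To make the failure concrete, here's a minimal sketch in Python (using the example IPs above) of the naive first-element parsing that makes this spoofing possible:

```python
# The header value the application sees after the attacker's request has
# passed through one well-behaved load balancer: the attacker chose the
# first entry, and the load balancer appended the real source IP.
xff = "123.123.123.123, 94.6.194.169"

# Naive and vulnerable: trust the first (leftmost) element.
naive_ip = xff.split(",")[0].strip()
print(naive_ip)  # prints the attacker-chosen 123.123.123.123
```

An attacker who randomises that first entry on every request gets a fresh rate-limit bucket every time.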

What you need to do instead, if you have exactly one reverse proxy (such as a load balancer) in front of your application, is look at the last element of the X-Forwarded-For list, since that will be the one your reverse proxy set. If you have two layers of reverse proxies, you want the second-last IP address in the list (and if you have three layers of reverse proxies, you look at the third-last, and so on).
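Assuming you know the number of reverse proxies, this "count back from the right" rule can be sketched in Python (the function name and error handling here are my own illustration, not taken from any framework):

```python
def client_ip(xff_value: str, trusted_proxy_count: int) -> str:
    """Return the client IP from an X-Forwarded-For value, given how many
    trusted reverse proxies have appended entries to it."""
    hops = [hop.strip() for hop in xff_value.split(",")]
    if len(hops) < trusted_proxy_count:
        # Fewer entries than trusted proxies: the header can't be genuine.
        raise ValueError("malformed X-Forwarded-For header")
    # Each trusted proxy appended exactly one entry, so counting that many
    # entries back from the right lands on the address that the outermost
    # trusted proxy actually received the request from.
    return hops[-trusted_proxy_count]

# One trusted proxy: the attacker-supplied first entry is ignored.
print(client_ip("123.123.123.123, 94.6.194.169", 1))             # 94.6.194.169
# Two trusted proxies: take the second-last entry.
print(client_ip("123.123.123.123, 94.6.194.169, 10.0.0.5", 2))   # 94.6.194.169
```

Everything to the left of the trusted entries is attacker-controllable and gets ignored, which is the whole point.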

Note that this fundamentally requires that your application knows how many reverse proxies are sitting between it and the public internet.[3][4] If your application does not know this, it cannot determine, in a way that an attacker can't spoof, the public IP address from which the request was originally sent.

A corollary of this is that if a web application framework purports to expose some method of getting the end user's IP address that magically parses headers like X-Forwarded-For, without having first to be configured in some way to tell it how many layers of reverse proxies it's running behind, then you should view that method with extreme scepticism, because it is almost guaranteed to be vulnerable to spoofing.

For example, consider Micronaut's HttpClientAddressResolver, recent use of which by a colleague of mine inspired this blog post. At the time that I write this post, its docs say:

You may need to resolve the originating IP address of an HTTP Request. Micronaut includes an implementation of HttpClientAddressResolver.

The default implementation resolves the client address in the following places in order:

  • The configured header

  • The Forwarded header

  • The X-Forwarded-For header

  • The remote address on the request

If you're planning on using the client's IP address for rate limiting or some other security-related purpose, then as soon as you read the documentation I quote above, alarm bells should go off! There's no hint of a way to configure Micronaut to know how many reverse proxies it's behind, so we can only assume that when it looks at the X-Forwarded-For header, it simply takes the first element in the list - the one the attacker can control - and sure enough, it does. (The fact that it will try multiple headers is also a problem, of course! If your reverse proxies only set X-Forwarded-For, an attacker can set Forwarded and your Micronaut app will give that priority over the X-Forwarded-For set by your reverse proxies.)

For an example of a framework that at least makes it possible to get this right using its built-in functionality, see Werkzeug, which requires you to configure it with an x_for parameter that tells it how many proxies it's behind. That's okay to use when you need an IP for rate limiting. Anything that just magically gets an IP address without needing such configuration categorically isn't okay to use.
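As a sketch of what using Werkzeug's ProxyFix middleware looks like in practice (the x_for parameter is the one mentioned above; the little echo app is my own illustration):

```python
from werkzeug.middleware.proxy_fix import ProxyFix


def app(environ, start_response):
    # Echo back the address Werkzeug now considers the client IP.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [environ["REMOTE_ADDR"].encode()]


# x_for=1: trust exactly one reverse proxy's X-Forwarded-For entry.
# ProxyFix rewrites REMOTE_ADDR to the last (rightmost) trusted entry,
# ignoring anything the client prepended itself.
app = ProxyFix(app, x_for=1)
```

With this in place, a request carrying the spoofed header from earlier resolves to 94.6.194.169, not the attacker-chosen 123.123.123.123.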

A final note: although I'm writing this blog post in response to multiple colleagues getting this wrong at work, I want to be clear that it wasn't unusual for them to make that mistake. If you do a Google search for bypass rate limit x-forwarded-for, you will find multiple pages of results listing blog posts, articles, and LinkedIn posts by pentesters and other people whose job is to hack web applications, all advising that you try setting an X-Forwarded-For header to bypass rate limits... and you'll also find lots of publicly disclosed reports on HackerOne from bounty hunters who discovered such bypasses in applications they were testing. I find the existing writeups I've seen a bit frustrating because they just present setting X-Forwarded-For as a magical incantation you should try as an attacker and don't explain what mistake the defender must have made to be vulnerable to it, nor what the defender should've done instead; hopefully this post, written from a developer's perspective instead of a hacker's, can fill that void. But those search results do make a couple of important facts clear: developers of web applications are screwing this up constantly in exactly the same way my colleagues did, and the bad guys are absolutely aware of it and will routinely try exploiting precisely this weakness to bypass your rate limits.

Stop letting them!


  1. Some will doubtless object here that such rate limits should be implemented in your load balancer, ideally by simply setting some config parameters that the load balancer offers to enable rate limiting, and that ordinary web developers should not usually end up reinventing the wheel by implementing IP-based rate limiters themselves. They probably have a point, but I don't think this is always true. Sometimes you have particular bits of functionality in your web application that warrant strict rate limits, like login, but you don't want to apply those strict rate limits across the entire application. In that case, trying to configure them in your load balancer may simply not be something the load balancer software you're using supports at all or else may be possible but create a maintenance nightmare where there's a list of endpoint paths or regexes configured in your load balancer to tell it which requests to rate limit, and you have to keep that config in sync with changes to your application. I'd rather roll my own rate limiter than deal with that. ↩︎

  2. Note that X-Forwarded-For doesn't have an official spec anywhere, unlike the Forwarded header specced in RFC 7239. In practice, though, I don't know of any software that supports only Forwarded and not X-Forwarded-For, and I do know of software - like Amazon's ALB - that supports only X-Forwarded-For and not Forwarded. Forwarded may be the official standard, but X-Forwarded-For is the de facto standard - even today, 9 years after RFC 7239 was published. ↩︎

  3. An important unstated premise here is that there needs to be a fixed number of reverse proxies between the application and the public internet! If going through the reverse proxies is optional - that is, if the application itself or any of the reverse proxies besides the outermost one are themselves accessible from the public internet - then an attacker will still be able to trick your rate limiter by bypassing the outermost reverse proxy and setting an X-Forwarded-For header. ↩︎

  4. An alternative strategy in theory would be to have your outermost reverse proxy discard any incoming X-Forwarded-For header and set a new one with a single IP in it, instead of appending to the incoming X-Forwarded-For header. Then your application would be able to safely use the first item in the X-Forwarded-For as the user's IP address, and better yet, the logic would be robust against adding or removing additional intermediate reverse proxies. This is definitely possible sometimes; for instance, you can configure Nginx to act in this way by writing proxy_set_header X-Forwarded-For $remote_addr in your config. If this is an option for you, great! In practice, though, you may discover to your frustration that this simply isn't a behaviour your reverse proxy supports! For instance, AWS load balancers support three different ways of handling the X-Forwarded-For header, but none of them is this desired behaviour of discarding the incoming header and setting a new one based on the IP of the incoming request. Given that much of the web runs on AWS these days, that limitation likely means there are a lot of web developers for whom this alternative approach simply isn't an option right now. ↩︎