Bulk IP blocking is bad for your site

We've been receiving the same feature request at least a dozen times every month: "Can you add a bulk block/unblock feature to the WAF's security exceptions log?". In short, our clients are asking us to make it easy to block (blacklist) IP addresses in bulk. In other variations we are asked if we could export and import the IP blacklist, or even have it synchronised between sites. These are dangerous features which would harm your site far more than they would help it. Since it's not obvious why, we decided to share some insight with you.

TL;DR: Let Admin Tools handle IP blacklisting for you and do enable the Project Honeypot integration. Don't blacklist IPs by yourself unless you understand the implications. Otherwise you're bound to do more harm to your business than good.

Do you really know your client's IP address?

When a remote client (browser, app, automation script, etc.) connects to your site it sends your server its Internet Protocol address, commonly known as "IP". An IP address looks either like four numbers 0-255 separated by dots, such as 8.8.8.8 (IPv4), or like one of the acceptable IPv6 representations, such as 2001:db8::1. The IP address tells the server where to send the reply.
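As a quick illustration of the two address families, Python's standard library can parse both notations and normalise the many equivalent IPv6 spellings (this is just a sketch using the same example addresses as above):

```python
import ipaddress

# ip_address() picks the right address family automatically.
v4 = ipaddress.ip_address("8.8.8.8")
v6 = ipaddress.ip_address("2001:db8::1")

# IPv6 has several acceptable representations; the fully expanded form
# below is normalised to the canonical compressed form "2001:db8::1".
long_form = ipaddress.ip_address("2001:0db8:0000:0000:0000:0000:0000:0001")
```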

Things get complicated when you have a reverse proxy server, CDN or cache between your client and your server (they all work in the same way for the purposes of this blog post so I'll call them all "proxy"). The client no longer talks to your server directly, it talks to the proxy. The proxy sends the clients the response they were looking for. The proxy itself talks to the server on behalf of the clients. Here's the problem: the proxy has an IP address of its own which it has to send to the server, otherwise the server wouldn't know how to send back the response to the proxy. Therefore all requests –from legitimate clients and hackers alike– seem to be originating from the same IP address: the proxy's IP address.

In an attempt to fix that issue, web server and proxy maintainers decided to declare the real client's IP address in the X-Forwarded-For HTTP header. The proxy receives the request from the client, adds the X-Forwarded-For header with a value equaling the client's IP address and sends it to the server. Now the server knows both the proxy's IP address to send back the reply and the original client's IP address.

This is convenient but leaves us open to a lot of problems. If there's no proxy and you take into account the X-Forwarded-For HTTP header you are allowing an attacker to spoof their IP address, i.e. tell the server that the attack originates from a different IP address. That's why Admin Tools needs the "IP workarounds" switch: when you turn it off you are telling it to ignore the X-Forwarded-For header.
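To make the spoofing risk concrete, here is a minimal Python sketch of the idea behind such a switch (the proxy address and the helper function are made up for this example, not Admin Tools' actual code): the X-Forwarded-For header is only honoured when the request demonstrably came from a proxy you trust.

```python
import ipaddress

# Hypothetical trusted proxy/CDN address - replace with your real one.
TRUSTED_PROXIES = {ipaddress.ip_address("203.0.113.10")}

def client_ip(remote_addr, x_forwarded_for, use_workarounds):
    """Return the IP address to act on.

    remote_addr is the TCP peer address (it cannot be spoofed, because the
    server must be able to send the response back to it); x_forwarded_for
    is the raw header value, or None if the header is absent.
    """
    peer = ipaddress.ip_address(remote_addr)
    if use_workarounds and x_forwarded_for and peer in TRUSTED_PROXIES:
        # The rightmost entry is the one our own proxy appended, i.e. the
        # address of the client which actually talked to the proxy.
        return x_forwarded_for.split(",")[-1].strip()
    # No trusted proxy in front of us: the header is attacker-controlled,
    # so ignore it and use the TCP peer address.
    return remote_addr
```

With the workarounds disabled the header is never consulted, which is exactly the safe behaviour when there is no proxy in front of your server.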

Assuming that your server now has the correct IP address of the real client trying to connect to your site it can block that IP address from accessing your site. But, is this a good thing?

No self-respecting hacker uses their own Internet connection

The IP address –especially an IPv4 address– is unique per Internet connection, not per computer, let alone per person. Therefore thinking that an IP address uniquely identifies an attacker is a big fallacy. In fact, no self-respecting hacker would ever use their own Internet connection. They would use a public Internet connection (cafe, library, school, some poor guy's open WiFi network, ...) or a botnet (compromised computers used by real people). If you block one of these IPs they can move on to another. Only the very stupid hackers will use their home Internet connection, repeatedly, to attack you. Yes, they exist, they are called script kiddies and we'll see later how you can deal with them.

If you forever block the public IP of a cafe, library, school or of real, unsuspecting people, you are preventing their legitimate users from accessing your site forever. You will be none the wiser. When these people try to access your site and see a message implying they are possible hackers, they will just shrug and move on to your competitor's site. Therefore blocking the wrong IPs actively hurts your own business.

Most "attacks" are legitimate user errors

And now let's move to the argument that if an IP is in Admin Tools' log file of potential attacks it needs to be blacklisted forever. This is an even bigger fallacy.

Do keep in mind that any Web Application Firewall –including Admin Tools– has a set of rules that are triggered by anything that looks like an attack on your site. These rules are designed with the goal of eradicating false negatives (letting an attack go by undetected) and minimising false positives (misdiagnosing a legitimate request as an attack). Since the perfect rule is about as real as the Yeti and the Loch Ness Monster (people claim to have seen them but are unable to provide any hard evidence) we have to err towards one side. The side everyone errs towards is false positives, i.e. some legitimate requests will be mistakenly recognised as attacks and get logged as such. Not to mention that if you are treating failed logins as attacks, the IP of any user who mistypes their password will end up in the log.

See the problems so far? Being sure about the IP of the attacker is questionable, unless you know your server setup. Only script kiddies (inexperienced wannabe hackers) would use their own IP address to attack your site. Legitimate users' IP addresses will inevitably end up in the log file when they or a WAF rule screws up. Blocking these legitimate users' IP addresses hurts your business.

Therefore blindly blocking IPs in bulk will only hurt your business!

Synchronising blacklists is a DoS amplifier

The other aspect of the feature requests we are receiving is about automatically synchronising the IP blacklists between servers. This is a very dangerous proposition that can be used as a Denial of Service (DoS) amplifier.

The first problem with this proposition is the weak link. When you have n servers participating in a network of synchronised blacklists, one of them acts as a master, keeping the definitive list. If the master server is offline or unreachable, the slave server hangs until the connection times out. If the slave server is under a DoS attack it will be trying to send many of these requests to the master server without being able to reach it, because its bandwidth is being saturated by the attack. Since each execution thread is held for much longer than it should be, the Denial of Service attack succeeds with far fewer requests than would otherwise be required (you are depleting the server's resources faster due to the long-running requests!). Therefore your "protection" feature just acted as a Denial of Service amplifier, bringing your server down faster.

Then we have the concurrency problem. When a slave server adds an IP to the blacklist it sends a request to the master, which in turn sends n-2 requests, one to each of the other slave servers. That's a total of n-1 requests for each blocked IP address. Each of these requests is computationally expensive on both the master and the slave server: the web server needs to load PHP, boot Joomla!, load all plugins, execute the component, verify the identity of the master server, check if the record already exists in the database, commit it and send a success message back. This only takes 1-5s per server, but you can see how a lot of activity can snowball on your server. Remember what we said about one slave server being under DoS? Now you have all your servers under DoS.
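A back-of-envelope calculation, using the 1-5 second figure above, shows how quickly this snowballs (the function and its default figures are purely illustrative):

```python
def sync_cost(n_servers, seconds_per_request=5, blocks_per_minute=10):
    """Server-seconds burned per minute synchronising blocked IPs.

    Each blocked IP triggers n-1 expensive requests in total: one from
    the slave to the master, then one from the master to each of the
    other n-2 slaves. Every request is a full PHP/Joomla! bootstrap.
    """
    requests_per_block = n_servers - 1
    return blocks_per_minute * requests_per_block * seconds_per_request

# Ten synchronised sites and a modest attack causing 10 blocks a minute:
# 10 blocks * 9 requests * 5 seconds = 450 server-seconds of busy work
# for every 60 seconds of wall-clock time.
```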

Finally, if the authentication method is compromised, an attacker can block arbitrary IP addresses from accessing your server. Using some old-fashioned social engineering / phishing they can get your IP address and block it before they launch an attack, making sure you won't be able to stop them until their job is done. But why would it be compromised in the first place, you ask. There are two methods for authenticating APIs to each other: cryptographic certificates and passwords (usually called "tokens"). The former is as bulletproof as it gets but doesn't work on many low-end shared servers, i.e. the ones most likely to host a site under attack. So that leaves us with token authentication. Without HTTPS on the site (of course the sites at most risk aren't using HTTPS to begin with!) the token is sent unencrypted over the Internet, making it possible to compromise it.

So, this feature ends up being a massive mistake that can compromise your site. Big oops. Not to mention that Project Honeypot integration is much more efficient at doing the same thing. More on that later.

How Admin Tools protects you against all that

Admin Tools is designed to handle IP-based blocks automatically, in two levels. Both can be configured in Configure WAF, Auto-ban Repeat Offenders.

The first level of protection is a temporary block of IP addresses that are a repeated source of security exceptions. For example, you can tell Admin Tools to block IPs after 3 detected attacks in 1 minute, with the block itself lasting 15 minutes. These are the default and recommended settings. 3 detected attacks in 1 minute is usually beyond a human accidentally screwing up (if you want to be more certain, tighten that to 3 attacks in 30 seconds). At the same time, a hacker trying to work around this limit would have to slow down a potential attack so much that they'd simply abandon your site for an easier, more profitable target. The 15' block is enough to drive most automated attackers away (you are costing them too much bandwidth!) while not upsetting a real human who was blocked in error. So, basically, this feature alone is enough to deal with most attacks that you'd think deserve IP blocking, without human interaction.

The second level of protection is permanently blacklisting IPs which repeatedly end up in the temporary block list or, as we call it, "IP blacklisting of persistent offenders". When this feature is enabled you can tell Admin Tools to add IPs to the permanent blacklist after, for example, 3 automatic IP blocks. This will catch the script kiddies we discussed above, i.e. newbie wannabe hackers who are using their own Internet connection to launch a (usually unattended) attack on your site. These are exactly the IPs you want to permanently blacklist. However, if the attacker is connecting from a public WiFi hotspot you are preventing everyone on that hotspot from accessing your site. Is this an acceptable risk? Think about it before enabling this feature.

Finally, you can use the wisdom of the crowd to preemptively ban the IPs of known hackers and spammers. This is performed through the integration with the free Project Honeypot service. You just need to register with Project Honeypot and enter your Project Honeypot HTTP:BL (HTTP Black List) key in Admin Tools' configuration. A network of sites acts as a honeypot (trap) for potential hackers and spammers, reporting their IPs to the Project Honeypot servers. When enough of those reports are gathered, the IP is marked as belonging to a spammer or hacker. Admin Tools queries Project Honeypot for each visitor's IP address. If it belongs to a known spammer or hacker the visitor is automatically prevented from accessing your site. This works far better than any IP block list synchronisation feature implemented at the PHP application level ever could.
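For the curious: an HTTP:BL lookup is just a cheap DNS query. Based on Project Honeypot's published DNS interface, you query your access key plus the visitor's reversed IPv4 address under dnsbl.httpbl.org; a 127.x.y.z answer encodes the listing (NXDOMAIN means the IP is not listed). A Python sketch of the query name and answer parsing (the access key below is made up, and the actual DNS resolution is left to your resolver library):

```python
def httpbl_query_name(access_key, visitor_ip):
    """Build the DNS name to query for an HTTP:BL lookup.

    The visitor's IPv4 octets are reversed, DNSBL-style.
    """
    reversed_ip = ".".join(reversed(visitor_ip.split(".")))
    return "{}.{}.dnsbl.httpbl.org".format(access_key, reversed_ip)

def parse_httpbl(answer):
    """Decode a 127.<days>.<threat>.<type> HTTP:BL answer.

    <type> is a bitmask: 1 = suspicious, 2 = harvester,
    4 = comment spammer; 0 means a known search engine.
    """
    octets = [int(o) for o in answer.split(".")]
    days, threat, visitor_type = octets[1], octets[2], octets[3]
    return {
        "days_since_last_seen": days,
        "threat_score": threat,
        "is_harvester": bool(visitor_type & 2),
        "is_comment_spammer": bool(visitor_type & 4),
    }
```

Because it's a single DNS lookup against Project Honeypot's own infrastructure, the cost per visitor is negligible compared to the full PHP round trips a homegrown synchronisation scheme would need.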

What exactly should you do?

From Configure WAF, Auto-ban Repeat Offenders enable the automatic IP blocking of repeat offenders and disable the IP blacklisting of persistent offenders. Enable the Project Honeypot integration. This is more than enough to protect you against the kind of attackers that can be protected against using IP blocking techniques.
