
Mitigating the risk of brute force login compromise using Redis cache in ASP.NET Core Identity

Any application that requires user authentication must take adequate steps to protect the user accounts for which it is responsible. This includes properly hashing and storing passwords, returning feedback that doesn't disclose information useful to an attacker, providing a means for password reset, and so on. The ASP.NET Core Identity membership system provides much of this functionality out of the box, using tried and tested implementations that avoid common mistakes and pitfalls. It is an excellent platform on which to build your application's authentication system.

ASP.NET Core Identity provides a means of mitigating brute force login attempts through user lockout. After a configurable number of failed login attempts, a user's account is locked for a period of time. Both the maximum number of attempts, and the lockout period, are configurable.

While this is certainly a valid strategy, it does have some weaknesses:

  • The system can be trivially abused by a malicious party to lock a user out of their account. That is, where the goal of the attacker is not necessarily to gain access to the account, but simply to prevent the user from accessing the account themselves. The classic example is an online auction site, where it could be beneficial to lock a competing bidder out of their account towards the end of an auction, so the attacker has less competition.
  • An attacker can cause an effective denial-of-service by locking out multiple accounts.
  • It is ineffective against automated attacks where a common password is attempted against multiple accounts.

The strategy below mitigates the risk as follows:

  • Login failures against the same account will have exponentially increasing wait times.
  • After a specific threshold of failed attempts has been reached against a single user, a CAPTCHA must be solved along with providing the credentials for that user.
  • After a specific threshold of failed attempts against all accounts has been exceeded, a CAPTCHA must be solved along with providing the credentials for all users.

Note that this assumes you are already enforcing strong password rules: a minimum number of characters, allowing all characters, requiring certain types of characters, checking against known weak or compromised passwords, etc. Note also that the code below is certainly not production-ready; it uses poor practices such as hard-coded values (instead of reading from configuration), code duplication, classes with more than one responsibility, etc. The point is to demonstrate the concepts.

Here are the workflows:

[Diagram: GET request workflow]

[Diagram: POST request workflow]

For the demonstration below, I am using a very simple login page, based on the default provided by ASP.NET Core Identity, but the technique used could be adapted to most authentication logic.

Here is the page we are starting with:

Login.cshtml

Login.cshtml.cs
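For reference, a trimmed-down page model in the spirit of the default Identity scaffolding looks roughly like this (a sketch only; names and details may differ from the actual listing):

```csharp
// Login.cshtml.cs - minimal starting point, modeled on the default
// ASP.NET Core Identity login page (a sketch, not the exact listing).
using System.ComponentModel.DataAnnotations;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class LoginModel : PageModel
{
    private readonly SignInManager<IdentityUser> _signInManager;

    public LoginModel(SignInManager<IdentityUser> signInManager) =>
        _signInManager = signInManager;

    [BindProperty]
    public InputModel Input { get; set; }

    public class InputModel
    {
        [Required, EmailAddress]
        public string Email { get; set; }

        [Required, DataType(DataType.Password)]
        public string Password { get; set; }

        [Display(Name = "Remember me?")]
        public bool RememberMe { get; set; }
    }

    public void OnGet() { }

    public async Task<IActionResult> OnPostAsync(string returnUrl = null)
    {
        returnUrl ??= Url.Content("~/");

        if (!ModelState.IsValid) return Page();

        // Note: lockoutOnFailure is off; we replace it with our own mitigation.
        var result = await _signInManager.PasswordSignInAsync(
            Input.Email, Input.Password, Input.RememberMe, lockoutOnFailure: false);

        if (result.Succeeded) return LocalRedirect(returnUrl);

        ModelState.AddModelError(string.Empty, "Invalid login attempt.");
        return Page();
    }
}
```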

One more bit of setup: we need access to a Redis cache instance. (If you do not have a cache available, you can use the Windows Subsystem for Linux to install an instance on Ubuntu.) We then add a reference to the StackExchange.Redis package that provides a clean API for Redis:

dotnet add package StackExchange.Redis

OK, with that out of the way, we're ready to introduce our changes. The first thing to do is to increment our failure counts in the event of a failed login. We increment the counts both for the specific user, as well as for all users. First we add a reference to a shared connection to the cache at the top of Login.cshtml.cs, as well as two new methods to increment the failure counts. For the user failure count, we'll return the current failure count, as we're going to use this as part of our mitigation strategy.
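As a sketch, with key names of my own choosing (`login:failures:{user}` and `login:failures:total` are assumptions), those additions might look like this:

```csharp
// Field-level additions to LoginModel (sketch). ConnectionMultiplexer is
// designed to be shared and reused, not created per request.
using StackExchange.Redis;

private static readonly Lazy<ConnectionMultiplexer> _redis =
    new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect("localhost"));

// Increment the per-user failure count and return the new value;
// the caller uses it to drive the exponential delay.
private static async Task<long> IncrementUserFailuresAsync(string userName)
{
    var db = _redis.Value.GetDatabase();
    return await db.StringIncrementAsync(
        $"login:failures:{userName.ToLowerInvariant()}");
}

// Increment the failure count across all accounts.
private static async Task IncrementTotalFailuresAsync()
{
    var db = _redis.Value.GetDatabase();
    await db.StringIncrementAsync("login:failures:total");
}
```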

On repeated failed login attempts for the same user, we will exponentially increase the time it takes for our server to respond. In the case of a legitimate user who accidentally fat-fingers a password, the delay will barely be noticeable. But by the time multiple failures have occurred, the delay will slow down malicious users attempting to compromise an account.

In the event of login failures:
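The failed-login branch waits before responding. A minimal sketch of the delay calculation (the formula is an assumption; it reproduces the published pattern to within a second at the cap):

```csharp
// Exponential backoff: 2^n - 1 seconds (1, 3, 7, 15, 31, 63, ...),
// capped after the 7th failed attempt at roughly two minutes.
static int FailureDelaySeconds(long failedAttempts)
{
    var n = (int)Math.Min(failedAttempts, 7);
    return (1 << n) - 1;
}

// Usage inside the failed-login branch of OnPostAsync (sketch; helper
// names are assumptions):
//   var failures = await IncrementUserFailuresAsync(Input.Email);
//   await IncrementTotalFailuresAsync();
//   await Task.Delay(TimeSpan.FromSeconds(FailureDelaySeconds(failures)));
```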

This introduces an exponential delay in the following pattern:

Attempt      Delay
1            1 second
2            3 seconds
3            7 seconds
4            15 seconds
5            31 seconds
6            1 minute 3 seconds
7 or more    2 minutes 8 seconds

After a few failed attempts, this is what we see in Redis:
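With hypothetical key names (substitute whatever your code actually uses), a quick look from redis-cli might be:

```shell
# Key names here are hypothetical; adjust to match your code.
redis-cli GET "login:failures:alice@example.com"
redis-cli GET "login:failures:total"
```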

This is a start, but it doesn't do much for automated attacks, which could simply throw a bunch of simultaneous attempts, or start a new one when any delay is detected. We will attempt to frustrate automated attempts by forcing a CAPTCHA to be solved whenever we detect suspicious activity. There's no reason we couldn't always make the CAPTCHA required—that would certainly be more secure. But for some users, a CAPTCHA can be a real source of frustration, and might detract from the usability of your site. This is something that needs to be determined on a site-by-site basis.

First we introduce a method that detects whether or not we require a CAPTCHA to be solved:

Login.cshtml.cs
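A sketch, assuming the shared cache connection added earlier is exposed as a `_redis` field, and using illustrative threshold values (not the author's exact numbers):

```csharp
// Thresholds are illustrative assumptions.
private const int UserCaptchaThreshold = 3;
private const int TotalCaptchaThreshold = 10;

// A CAPTCHA is required once either the per-user or the global
// failure count has crossed its threshold.
private static async Task<bool> RequiresCaptchaAsync(string userName)
{
    var db = _redis.Value.GetDatabase();

    var userValue = await db.StringGetAsync(
        $"login:failures:{userName?.ToLowerInvariant()}");
    long userFailures = userValue.HasValue ? (long)userValue : 0;
    if (userFailures >= UserCaptchaThreshold) return true;

    var totalValue = await db.StringGetAsync("login:failures:total");
    long totalFailures = totalValue.HasValue ? (long)totalValue : 0;
    return totalFailures >= TotalCaptchaThreshold;
}
```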

And then before we attempt to validate credentials, verify the CAPTCHA if required:

Login.cshtml.cs
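A sketch of how the POST handler might gate on the CAPTCHA (the method names are assumptions):

```csharp
// Inside OnPostAsync, before attempting PasswordSignInAsync (sketch).
if (await RequiresCaptchaAsync(Input.Email))
{
    // Adds model errors when no valid reCAPTCHA response accompanies the POST.
    await ValidateGoogleRecaptchaAsync();
    if (!ModelState.IsValid)
    {
        return Page();
    }
}
```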

The ValidateGoogleRecaptchaAsync method is not explained in detail here, as that is outside our scope. At a high level, it will add errors to the model if a valid Google reCAPTCHA response is not included with the request. For more details, see Integrating Google reCAPTCHA v2 with an ASP.NET Core Razor Pages form. Here is a sample implementation:

Login.cshtml
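A sketch of the view-side addition (the `CaptchaRequired` flag and site-key property are assumed names; the `g-recaptcha` div and api.js script are the standard reCAPTCHA v2 integration):

```html
@* Shown only once suspicious activity has been detected *@
@if (Model.CaptchaRequired)
{
    <div class="g-recaptcha" data-sitekey="@Model.RecaptchaSiteKey"></div>
    <script src="https://www.google.com/recaptcha/api.js" async defer></script>
}
```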

appsettings.json (or secrets file)
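The secret key belongs in user secrets rather than source control; the section and key names below are assumptions:

```json
{
  "Recaptcha": {
    "SiteKey": "<your site key>",
    "SecretKey": "<your secret key>"
  }
}
```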

We have introduced a bit of a gap, in that the first POST attempt from a legitimate user whose account has already exceeded the threshold is always going to fail (requiring a CAPTCHA), even if the credentials are correct. (The GET request, before knowing the user id, will not display the CAPTCHA). Since this is not a typical workflow, I think that is acceptable; a warning that something suspicious is happening with the account is returned.

To prevent account enumeration, we return the same messages, and introduce the same delays, even if the user id is completely unknown to our system.

So far we have handled an attack against a single user, but what about an attacker attempting to compromise multiple accounts, for example by trying the same password against various user ids? Here is a starting implementation; a number of deficiencies will be addressed as we move along:

Login.cshtml.cs
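A sketch of that starting point, assuming the shared `_redis` connection field and a single global counter (the threshold of 10 is illustrative):

```csharp
// Starting implementation (sketch): one global counter compared against a
// fixed threshold. Its deficiencies, like never resetting, are addressed below.
private static async Task<bool> AllUserFailuresExceededAsync()
{
    var db = _redis.Value.GetDatabase();
    var total = await db.StringGetAsync("login:failures:total");
    return total.HasValue && (long)total >= 10;
}
```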

We are mitigating the attack, but we are also introducing more friction to legitimate users than we would like. The biggest problem is that there is no "reset" on the failure counters. Once a user reaches the failure threshold, they will always be stuck completing the CAPTCHA. Same for all users; once a certain number of login failures have occurred, every user will have to complete a CAPTCHA, indefinitely.

Let's address the user threshold first. One way to handle this would be to reset the user failure counter upon successful login. In many cases, this might be a perfectly acceptable solution. One risk that it introduces is when a legitimate user is "fighting" with a malicious attacker/bot to gain access to the account at the same time. In this case, the legitimate user, upon successful login, is essentially opening the door for the attack to continue, at least for a few more attempts. A different strategy, but still very simple, is to just have the cache entry be invalidated after a certain amount of time:
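A sketch, with an assumed 15-minute window:

```csharp
// Increment the per-user count and (re)start a 15-minute expiry window
// (the window length is an assumption). With no further failures, the
// counter simply expires and the user is back to a clean slate.
private static async Task<long> IncrementUserFailuresAsync(string userName)
{
    var db = _redis.Value.GetDatabase();
    var key = $"login:failures:{userName.ToLowerInvariant()}";
    var count = await db.StringIncrementAsync(key);
    await db.KeyExpireAsync(key, TimeSpan.FromMinutes(15));
    return count;
}
```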

We could handle the situation with login failures across all accounts similarly, but it would be difficult to come up with a threshold and expiration that would make sense. A single user can be expected to eventually remember their password, but most sites will be experiencing login failures across all users with some relative frequency. We want to discern the difference between the frequency pattern of our legitimate users making password errors, versus an automated attack attempting to brute force credentials. This will very much depend on the traffic to your site, but one strategy, used below, is to establish various time-based thresholds. For example, mitigation protection would be activated whenever any of the following are true:

  • More than 10 failed attempts in the last 1 minute.
  • More than 20 failed attempts in the last 5 minutes.
  • More than 60 failed attempts in the last hour.

To accomplish this, we need to complicate our cache structure a bit. Instead of a simple number, we are going to use a sorted set, storing each login failure as a separate record "scored" by its timestamp. When we need to determine whether our thresholds have been reached, we use ZCOUNT to quickly count the number of failures in each range. The only thing we then need to do is take care of cleanup: trimming entries from the set whose timestamps fall completely outside of our ranges. We do this on every read in the example below. If that is inefficient for your usage, you could instead trigger the cleanup from a batch process. Finally, we add an expiration to the key as well; in the case of no login failure activity (unlikely, but still...), the key will eventually clear itself out.

Here are the changes in Login.cshtml.cs:
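A sketch of the sorted-set version, assuming the shared `_redis` field; the key name, member format, windows, and thresholds are all illustrative:

```csharp
// The global counter becomes a sorted set, replacing the simple number.
private const string TotalFailuresKey = "login:failures:total";

// Record each global failure as a member scored by its Unix timestamp.
private static async Task RecordTotalFailureAsync()
{
    var db = _redis.Value.GetDatabase();
    var now = DateTimeOffset.UtcNow;
    // A Guid suffix keeps simultaneous failures from collapsing into one member.
    await db.SortedSetAddAsync(TotalFailuresKey,
        $"{now.UtcTicks}:{Guid.NewGuid():N}", now.ToUnixTimeSeconds());
    // With no failure activity for an hour, the whole key clears itself out.
    await db.KeyExpireAsync(TotalFailuresKey, TimeSpan.FromHours(1));
}

private static async Task<bool> AllUserFailuresExceededAsync()
{
    var db = _redis.Value.GetDatabase();
    var now = DateTimeOffset.UtcNow.ToUnixTimeSeconds();

    // Cleanup on every read: drop members older than the largest window.
    // A batch process could do this instead if reads are hot.
    await db.SortedSetRemoveRangeByScoreAsync(TotalFailuresKey,
        double.NegativeInfinity, now - 3600, Exclude.Stop);

    // ZCOUNT over each sliding window (thresholds are illustrative).
    return await db.SortedSetLengthAsync(TotalFailuresKey, now - 60, now) > 10
        || await db.SortedSetLengthAsync(TotalFailuresKey, now - 300, now) > 20
        || await db.SortedSetLengthAsync(TotalFailuresKey, now - 3600, now) > 60;
}
```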

This provides at least a baseline strategy for detecting and frustrating brute-force login attacks against your users, without overly frustrating legitimate login attempts.
