Beware of the default ASP.NET Core Identity settings

The other day I was involved in setting up a new project based on ASP.NET Core Identity, when I noticed something related to the default configuration that I thought would be worth sharing here.

Of course, while in general it is not great to rely on default settings of any product (especially when it is the security backbone of your application!) one also expects sensible defaults to be provided.

Let’s have a look.

Password hashing in ASP.NET Core Identity

To facilitate hashing of passwords, ASP.NET Core Identity in .NET 6.0 uses a PBKDF2 function with one of two possible hashing algorithms:

  • HMAC-SHA1 (referred to as V2 in the API surface)
  • HMAC-SHA256 (referred to as V3 in the API surface)

Any further documentation is rather sparse, so let us dive a bit deeper here.

The password hash produced by the ASP.NET Core Identity framework depends on the choice of the hashing algorithm, the work factor (iteration count) and the salt size. The salt is a randomly generated value that is different for each user and each password, and is stored as part of the hash. The iteration count specifies how many times the PBKDF2 function should be run to produce the hash.

The resulting hash is persisted into the Identity DB as a Base64-encoded byte array, with different components of the hash stored at different positions in the array, and is used for verifying the password upon user login.

Depending on whether the framework is configured with V2 or V3 logic (V3 is the default variant), the produced hash contains different information. Here is the breakdown of what it is made up of (a small parsing sketch follows the list):

  • The first byte (byte index 0) is a format marker, used to indicate which version of the hashing logic is being used. It is set to 0x00 for V2 and 0x01 for V3
  • (Only in V3) The next 4 bytes (bytes indexed 1-4) store the KeyDerivationPrf algorithm used for the hash (HMAC-SHA256), represented as an unsigned 32-bit integer in big endian byte order
  • (Only in V3) The next 4 bytes (bytes indexed 5-8) store the work factor (iteration count) used for the hash, represented as an unsigned 32-bit integer, also in big endian byte order
  • (Only in V3) The next 4 bytes (bytes indexed 9-12) store the size of the salt used for the hash, represented as an unsigned 32-bit integer in network byte order.
  • The next 16 bytes (starting at byte index 13 in V3 and 1 in V2) store the actual salt used for the hash; the salt size defaults to 16 bytes (128 bits)
  • Finally, the next n bytes (starting at byte index 29 in V3 and 17 in V2) store the derived subkey produced by the PBKDF2 function, which is the final hashed password; the requested value of n defaults to 32 bytes (256 bits)

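To make the byte layout concrete, here is a minimal sketch that hashes a password with the default options and then decodes the V3 header fields. PasswordHasher<TUser> lives in Microsoft.AspNetCore.Identity; the throwaway object() user is just a placeholder, since the hasher does not actually use it:

using System;
using System.Buffers.Binary;
using Microsoft.AspNetCore.Identity;

// Produce a V3 hash with the default options, then decode its header
var hasher = new PasswordHasher<object>();
var hash = Convert.FromBase64String(hasher.HashPassword(new object(), "P@ssw0rd!"));

Console.WriteLine($"Format marker: 0x{hash[0]:X2}"); // 0x01 = V3
// Bytes 1-4: the KeyDerivationPrf value (0 = HMAC-SHA1, 1 = HMAC-SHA256, 2 = HMAC-SHA512)
Console.WriteLine($"PRF: {BinaryPrimitives.ReadUInt32BigEndian(hash.AsSpan(1, 4))}");
// Bytes 5-8: the iteration count (work factor)
Console.WriteLine($"Iterations: {BinaryPrimitives.ReadUInt32BigEndian(hash.AsSpan(5, 4))}");
// Bytes 9-12: the salt size in bytes (16 by default)
Console.WriteLine($"Salt size: {BinaryPrimitives.ReadUInt32BigEndian(hash.AsSpan(9, 4))}");
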
The main reason behind using such a scheme is to make the password cracking process computationally expensive, so that attackers cannot easily crack passwords even if they manage to obtain the hashed values.

Because of the salt, a threat actor cannot simply perform a reverse lookup on the stored hash to find the original password. The salt ensures that each password hash is unique, even if two users have the same password - which in turn prevents attackers from using rainbow tables to quickly look up a password hash and recover the original password.
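
This is easy to observe first-hand: hashing the same password twice produces two different values, because a fresh random salt is generated on every call (same PasswordHasher<object> setup as in the earlier sketch):

using System;
using Microsoft.AspNetCore.Identity;

var hasher = new PasswordHasher<object>();
var first = hasher.HashPassword(new object(), "hunter2");
var second = hasher.HashPassword(new object(), "hunter2");
// Prints False: a fresh random salt makes every produced hash unique
Console.WriteLine(first == second);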

The salt does not need to be secret - in fact, as we already discussed, it is stored as part of the hash itself. But because the salt value is combined with the password using a one-way function, and the output of this function cannot be easily reversed to obtain the original password, the adversary must try different passwords, hash each one using the same salt value, and compare the resulting hash to the stored hash to see if they have found a match. And of course this must be repeated for every user, since each one has a unique salt.

On top of that, the high iteration count (work factor) of the PBKDF2 function makes the whole procedure very computationally intensive.

And this leads us to the main topic for today. OWASP publishes guidelines for the work factor that should be used for the PBKDF2 function:

  • PBKDF2-HMAC-SHA1: 1,300,000 iterations
  • PBKDF2-HMAC-SHA256: 600,000 iterations
  • PBKDF2-HMAC-SHA512: 210,000 iterations

Of course, the higher the iteration count (work factor), the more expensive the hash computation becomes and the slower the hashing of passwords gets. In turn, login and registration operations in your application slow down as well - but, at the same time, that is the whole point.
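
To get a feel for the cost, here is a rough timing sketch comparing the .NET 6.0 default of 10,000 iterations with OWASP's recommended 600,000 (Options.Create comes from Microsoft.Extensions.Options; the absolute numbers will of course vary with your hardware):

using System;
using System.Diagnostics;
using Microsoft.AspNetCore.Identity;
using Microsoft.Extensions.Options;

// Compare hashing cost at the .NET 6.0 default vs. the OWASP-recommended work factor
foreach (var iterations in new[] { 10_000, 600_000 })
{
    var hasher = new PasswordHasher<object>(
        Options.Create(new PasswordHasherOptions { IterationCount = iterations }));

    var stopwatch = Stopwatch.StartNew();
    hasher.HashPassword(new object(), "P@ssw0rd!");
    Console.WriteLine($"{iterations:N0} iterations: {stopwatch.ElapsedMilliseconds} ms");
}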

The defaults in ASP.NET Core Identity in .NET 6.0 are 1,000 iterations for HMAC-SHA1 and 10,000 for HMAC-SHA256. Now, HMAC-SHA1 is only there for backwards compatibility, is only used in the V2 scheme, and is generally deprecated. However, HMAC-SHA256 is the default hashing mechanism, and its iteration count is surprisingly low - off from the OWASP recommendation by a factor of 60!

Thankfully, the iteration count can easily be raised manually, because in V3 (contrary to V2) it is actually configurable. This can be done by adding the following line to your application startup code, where the services of your ASP.NET Core application are being configured:

services.Configure<PasswordHasherOptions>(opt => opt.IterationCount = 600_000);

This is then propagated to the built-in password hasher.
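
For context, here is a minimal sketch of where that line fits in a .NET 6.0 minimal-hosting Program.cs. The AddDefaultIdentity and ApplicationDbContext registrations are assumptions here - adjust them to match your own Identity setup:

using Microsoft.AspNetCore.Identity;

var builder = WebApplication.CreateBuilder(args);

// Assumed Identity registration - swap ApplicationDbContext for your own EF Core context
builder.Services.AddDefaultIdentity<IdentityUser>()
    .AddEntityFrameworkStores<ApplicationDbContext>();

// Raise the PBKDF2 work factor to the OWASP-recommended value for HMAC-SHA256
builder.Services.Configure<PasswordHasherOptions>(opt => opt.IterationCount = 600_000);

var app = builder.Build();
app.Run();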

Changes in .NET 7.0

Since the defaults in .NET 6.0 are really out of date, has the situation improved in .NET 7.0? As it turns out, yes, it has. The out-of-the-box behavior was changed in the codebase last year, but it only made it into the .NET 7.0 release and was never backported to 6.0.

The default algorithm used for password hashing was switched to HMAC-SHA512, and the iteration count was set to 100,000. This is still lower than the OWASP-recommended value of 210,000 iterations, so I recommend tweaking it manually anyway.
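
The same configuration hook applies - for example, to match the OWASP recommendation for HMAC-SHA512:

services.Configure<PasswordHasherOptions>(opt => opt.IterationCount = 210_000);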

When you upgrade your application from 6.0 to 7.0, and you were already using the V3 scheme before, then upon login the framework will recognize that a given user's password was hashed with the older algorithm (HMAC-SHA256) or with a lower work factor. In such cases, the password verification still succeeds, but the password is also rehashed using the new standards.
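
SignInManager performs this rehash-on-login for you, but the mechanism can also be simulated directly against the hasher. The sketch below (targeting .NET 7.0) verifies a hash created with a lower work factor using a hasher configured with a higher one:

using System;
using Microsoft.AspNetCore.Identity;
using Microsoft.Extensions.Options;

var user = new object();

// Simulate a hash stored under an old, low work factor
var oldHasher = new PasswordHasher<object>(
    Options.Create(new PasswordHasherOptions { IterationCount = 10_000 }));
var storedHash = oldHasher.HashPassword(user, "P@ssw0rd!");

// Verify it with a hasher configured for the new work factor
var currentHasher = new PasswordHasher<object>(
    Options.Create(new PasswordHasherOptions { IterationCount = 210_000 }));
var result = currentHasher.VerifyHashedPassword(user, storedHash, "P@ssw0rd!");

// On .NET 7.0 this prints SuccessRehashNeeded: the password matched, but the hash
// should be regenerated (and persisted) with the current settings
Console.WriteLine(result);
if (result == PasswordVerificationResult.SuccessRehashNeeded)
{
    storedHash = currentHasher.HashPassword(user, "P@ssw0rd!"); // persist this new hash
}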
