Posted to oak-issues@jackrabbit.apache.org by "Thomas Mueller (JIRA)" <ji...@apache.org> on 2013/02/18 21:57:14 UTC

[jira] [Commented] (OAK-634) PasswordUtility.isSame() performance bottleneck

    [ https://issues.apache.org/jira/browse/OAK-634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13580830#comment-13580830 ] 

Thomas Mueller commented on OAK-634:
------------------------------------

> On a successful login the record should be updated to contain a password hash with just one iteration

I agree that a repeated login should be fast. So your suggestion boils down to a simple LRU cache of known-good passwords, where the key is a single iteration of the password hash (not the plain text) so that the password itself is never kept in memory? Yes, I think that would make sense.
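
A minimal sketch of such a cache (hypothetical class and method names, not the actual Oak code; assumes the key is one SHA-256 round over user id, salt and password, and only the "this exact password was verified before" verdict is cached):

{code:java}
import java.nio.ByteBuffer;
import java.nio.charset.Charset;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Hypothetical sketch: LRU cache of known-good passwords. The cache key is a
 * single SHA-256 round over user id, salt and password, so neither the plain
 * text nor the full-strength hash is kept in memory.
 */
public class KnownGoodPasswordCache {

    private static final Charset UTF8 = Charset.forName("UTF-8");

    private final Map<ByteBuffer, Boolean> cache;

    public KnownGoodPasswordCache(final int maxEntries) {
        // access-ordered LinkedHashMap that evicts the least recently used entry
        this.cache = Collections.synchronizedMap(
                new LinkedHashMap<ByteBuffer, Boolean>(16, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<ByteBuffer, Boolean> eldest) {
                        return size() > maxEntries;
                    }
                });
    }

    /** One SHA-256 iteration over user id + salt + password. */
    private static ByteBuffer key(String userId, byte[] salt, char[] password) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(userId.getBytes(UTF8));
            md.update(salt);
            md.update(new String(password).getBytes(UTF8));
            return ByteBuffer.wrap(md.digest());
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    /** Remember a password that just passed the full (expensive) check. */
    public void rememberGood(String userId, byte[] salt, char[] password) {
        cache.put(key(userId, salt, password), Boolean.TRUE);
    }

    /** Fast path: true only if exactly this password was verified before. */
    public boolean isKnownGood(String userId, byte[] salt, char[] password) {
        return cache.get(key(userId, salt, password)) != null;
    }
}
{code}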

> The record should also keep track of unsuccessful login attempts

Yes, that would be nice, but the question is where and how exactly to apply the limit.

> though instead of SHA-256 we should be using something like bcrypt

I agree that using our own algorithm (as we do now) should be avoided. I would prefer PBKDF2 over bcrypt, because AFAIK PBKDF2 is the industry standard. I recently implemented PBKDF2 with SHA-256; the source code is at http://code.google.com/p/h2database/source/browse/trunk/h2/src/main/org/h2/security/SHA256.java#145 (there I also used a low number of iterations, but for a different reason: calculating a hash is slow on Android). There are also test cases; the test vectors are here: http://stackoverflow.com/questions/5130513/pbkdf2-hmac-sha2-test-vectors
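
For reference, a minimal sketch of PBKDF2 derivation and verification on top of the JCE (hypothetical class and method names; assumes the JDK or a provider offers the "PBKDF2WithHmacSHA256" SecretKeyFactory, older JDKs only ship "PBKDF2WithHmacSHA1"):

{code:java}
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.security.spec.InvalidKeySpecException;

/** Hypothetical sketch: PBKDF2 key derivation and verification via the JCE. */
public class Pbkdf2Sketch {

    private static final int KEY_BITS = 256;
    private static final int SALT_BYTES = 16;

    static byte[] newSalt() {
        byte[] salt = new byte[SALT_BYTES];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    static byte[] pbkdf2(char[] password, byte[] salt, int iterations)
            throws NoSuchAlgorithmException, InvalidKeySpecException {
        // "PBKDF2WithHmacSHA256" is assumed to be available from the provider
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        PBEKeySpec spec = new PBEKeySpec(password, salt, iterations, KEY_BITS);
        try {
            return factory.generateSecret(spec).getEncoded();
        } finally {
            spec.clearPassword(); // don't keep the plain text around longer than needed
        }
    }

    /** Constant-time comparison to avoid leaking how many bytes matched. */
    static boolean slowEquals(byte[] a, byte[] b) {
        int diff = a.length ^ b.length;
        for (int i = 0; i < Math.min(a.length, b.length); i++) {
            diff |= a[i] ^ b[i];
        }
        return diff == 0;
    }
}
{code}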

                
> PasswordUtility.isSame() performance bottleneck
> -----------------------------------------------
>
>                 Key: OAK-634
>                 URL: https://issues.apache.org/jira/browse/OAK-634
>             Project: Jackrabbit Oak
>          Issue Type: Bug
>          Components: core
>            Reporter: Jukka Zitting
>              Labels: performance
>
> The default 1000 SHA-256 iterations used for password hashes are seriously impacting the performance of login() calls. Here's a performance report of the number of milliseconds that a successful login takes with Jackrabbit 2.x and Oak (with an in-memory MK):
> {noformat}
> # Login                                  min     10%     50%     90%     max
> Jackrabbit                               560     570     577     704    1522
> Oak-Memory                              2537    2586    2630    2811    2916
> {noformat}
> Over 50% of that time is spent doing hash iterations in {{PasswordUtility.isSame()}}. This is a problem for two main reasons:
> # It severely drags down the performance of acquiring a new session, something that should be essentially free.
> # It opens up a denial-of-service attack vector: simply bombarding the system with login attempts would cause CPU usage to spike.
> Iterating a password hash is a good idea for preventing offline attacks against a stolen password database (though instead of SHA-256 we should be using something like bcrypt that's explicitly designed and analyzed for this purpose), but the current implementation doesn't make much sense in a scenario like ours where we can expect dozens or hundreds of logins per second even in normal non-peak use cases. Password iteration makes more sense in use cases where logins are infrequent (e.g. once a day per user) and persisted through something like a session key.
> So, assuming we want to keep the cost of an offline attack high, here's what I suggest we do for password-based logins:
> * Switch to bcrypt or a similar password hashing algorithm if possible.
> * For each active user in the system, keep an in-memory record to speed up login calls.
> ** On a successful login the record should be updated to contain a password hash with just one iteration (calculated from the plain text password provided in the successful login). Use this instead of the in-repository password hash for authenticating further login attempts.
> ** The record should also keep track of unsuccessful login attempts and limit them to at most N attempts per minute to prevent DOS attacks.
> The result of such in-memory record keeping should be to massively speed up normal logins (point 1 above) and to cap the CPU cost of the potential DOS attack (point 2) at O(N*K) hash computations per minute, with K being the total number of users in the system.
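
A minimal sketch of the per-user throttling described above (hypothetical names, fixed one-minute window, not a definitive implementation):

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Hypothetical sketch: limit unsuccessful logins to N per user per minute
 * using a fixed one-minute window (races on window rollover are tolerated).
 */
public class LoginThrottle {

    private static final long WINDOW_MILLIS = 60 * 1000L;

    private static final class Window {
        final long start;
        final AtomicInteger failures = new AtomicInteger();
        Window(long start) { this.start = start; }
    }

    private final int maxFailuresPerWindow;
    private final ConcurrentMap<String, Window> windows =
            new ConcurrentHashMap<String, Window>();

    public LoginThrottle(int maxFailuresPerWindow) {
        this.maxFailuresPerWindow = maxFailuresPerWindow;
    }

    /** True if this user may still attempt a login in the current window. */
    public boolean isAllowed(String userId) {
        return currentWindow(userId).failures.get() < maxFailuresPerWindow;
    }

    /** Call after a failed password check. */
    public void recordFailure(String userId) {
        currentWindow(userId).failures.incrementAndGet();
    }

    /** Call after a successful login to forget the failure count. */
    public void recordSuccess(String userId) {
        windows.remove(userId);
    }

    private Window currentWindow(String userId) {
        long now = System.currentTimeMillis();
        Window w = windows.get(userId);
        if (w == null || now - w.start >= WINDOW_MILLIS) {
            // start a fresh window; if two threads race, one window simply wins
            Window fresh = new Window(now);
            windows.put(userId, fresh);
            w = fresh;
        }
        return w;
    }
}
{code}

A fixed window is the simplest choice; a sliding window or exponential back-off would throttle more strictly at the cost of a little more bookkeeping.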
