Posted to issues@sentry.apache.org by "Hadoop QA (JIRA)" <ji...@apache.org> on 2018/03/13 03:01:00 UTC

[jira] [Commented] (SENTRY-1855) Improve scalability of permission delta updates

    [ https://issues.apache.org/jira/browse/SENTRY-1855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16396464#comment-16396464 ] 

Hadoop QA commented on SENTRY-1855:
-----------------------------------

Here are the results of testing the latest attachment
https://issues.apache.org/jira/secure/attachment/12881397/SENTRY-1855.002.patch against master.

{color:red}Overall:{color} -1 due to an error

{color:red}ERROR:{color} failed to apply patch (exit code 1):
The patch does not appear to apply with p0, p1, or p2



Console output: https://builds.apache.org/job/PreCommit-SENTRY-Build/3696/console

This message is automatically generated.

> Improve scalability of permission delta updates
> -----------------------------------------------
>
>                 Key: SENTRY-1855
>                 URL: https://issues.apache.org/jira/browse/SENTRY-1855
>             Project: Sentry
>          Issue Type: Bug
>          Components: Sentry
>    Affects Versions: 2.0.0
>            Reporter: Alexander Kolbasov
>            Assignee: Na Li
>            Priority: Major
>             Fix For: 2.1.0
>
>         Attachments: SENTRY-1855.001.patch, SENTRY-1855.002-sentry-ha-redesign.patch, SENTRY-1855.002.patch, SENTRY-1855.003-master.patch, SENTRY-1855.01-sentry-ha-redesign.patch
>
>
> Looking at the latest stress runs, we noticed that some transactions could fail to commit to the database (with a duplicate key exception) after exhausting all retries.
> The problem becomes more evident as more clients connect to Sentry to issue permission updates; it was reproduced consistently with 15 clients doing 100 operations each.
> In the past we introduced exponential backoff (SENTRY-1821), so as part of the test run the defaults were increased to a 750 ms sleep and 20 retries. Even with these settings the cluster still shows transaction failures, and the change also increases the latency of every transaction in Sentry. (A minimal, illustrative sketch of this retry pattern appears after the log excerpt below.)
> We need to revisit this and come up with a better way to solve this problem.
> {code}
> 2017-07-13 13:18:14,449 ERROR org.apache.sentry.provider.db.service.persistent.TransactionManager: The transaction has reached max retry number, Exception thrown when executing query
> javax.jdo.JDOException: Exception thrown when executing query
> 	at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:596)
> 	at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:252)
> 	at org.apache.sentry.provider.db.service.persistent.SentryStore.getRole(SentryStore.java:294)
> 	at org.apache.sentry.provider.db.service.persistent.SentryStore.alterSentryRoleGrantPrivilegeCore(SentryStore.java:645)
> 	at org.apache.sentry.provider.db.service.persistent.SentryStore.access$500(SentryStore.java:101)
> 	at org.apache.sentry.provider.db.service.persistent.SentryStore$11.execute(SentryStore.java:601)
> 	at org.apache.sentry.provider.db.service.persistent.TransactionManager.executeTransaction(TransactionManager.java:159)
> 	at org.apache.sentry.provider.db.service.persistent.TransactionManager.access$100(TransactionManager.java:63)
> 	at org.apache.sentry.provider.db.service.persistent.TransactionManager$2.call(TransactionManager.java:213)
> --
> 	at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:971)
> 	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3887)
> 	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3823)
> 	at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2435)
> 	at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2582)
> 	at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2530)
> 	at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1907)
> 	at com.mysql.jdbc.PreparedStatement.executeUpdate(PreparedStatement.java:2141)
> 	at com.mysql.jdbc.PreparedStatement.executeBatchSerially(PreparedStatement.java:1773)
> 	... 33 more
> 2017-07-13 13:18:14,450 ERROR org.apache.sentry.provider.db.service.thrift.SentryPolicyStoreProcessor: Unknown error for request: TAlterSentryRoleGrantPrivilegeRequest(protocol_version:2, requestorUserName:hive, roleName:2017_07_12_15_06_38_1_2_805, privileges:[TSentryPrivilege(privilegeScope:DATABASE, serverName:server1, dbName:2017_07_12_15_06_38_1_2, tableName:, URI:, action:*, createTime:1499904401222, grantOption:FALSE, columnName:)]), message: The transaction has reached max retry number, Exception thrown when executing query
> java.lang.Exception: The transaction has reached max retry number, Exception thrown when executing query
> 	at org.apache.sentry.provider.db.service.persistent.TransactionManager$ExponentialBackoff.execute(TransactionManager.java:255)
> 	at org.apache.sentry.provider.db.service.persistent.TransactionManager.executeTransactionBlocksWithRetry(TransactionManager.java:209)
> 	at org.apache.sentry.provider.db.service.persistent.SentryStore.execute(SentryStore.java:3330)
> 	at org.apache.sentry.provider.db.service.persistent.SentryStore.alterSentryRoleGrantPrivilege(SentryStore.java:593)
> 	at org.apache.sentry.provider.db.service.persistent.SentryStore.alterSentryRoleGrantPrivileges(SentryStore.java:633)
> 	at org.apache.sentry.provider.db.service.thrift.SentryPolicyStoreProcessor.alter_sentry_role_grant_privilege(SentryPolicyStoreProcessor.java:256)
> 	at org.apache.sentry.provider.db.service.thrift.SentryPolicyService$Processor$alter_sentry_role_grant_privilege.getResult(SentryPolicyService.java:997)
> 	at org.apache.sentry.provider.db.service.thrift.SentryPolicyService$Processor$alter_sentry_role_grant_privilege.getResult(SentryPolicyService.java:982)
> 	at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> 	at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> {code}
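
To make the retry behavior referenced in the description concrete, here is a minimal sketch of an exponential-backoff retry wrapper around a transaction body. This is not Sentry's actual TransactionManager implementation; the class name RetryWithBackoff, the jitter, and the interval-doubling policy are assumptions for illustration only. The 750 ms base sleep and 20-retry limit are the values quoted in the description above.

{code}
import java.util.concurrent.Callable;
import java.util.concurrent.ThreadLocalRandom;

// Illustrative sketch only (hypothetical names, not Sentry code): retry a
// transaction body with exponential backoff until it succeeds or the retry
// budget is exhausted.
public class RetryWithBackoff {

    private final int maxRetries;    // e.g. 20, as in the stress-test setting above
    private final long baseSleepMs;  // e.g. 750 ms, as in the stress-test setting above

    public RetryWithBackoff(int maxRetries, long baseSleepMs) {
        this.maxRetries = maxRetries;
        this.baseSleepMs = baseSleepMs;
    }

    public <T> T execute(Callable<T> transactionBody) throws Exception {
        Exception lastFailure = null;
        long sleepMs = baseSleepMs;
        for (int attempt = 1; attempt <= maxRetries; attempt++) {
            try {
                return transactionBody.call();
            } catch (Exception e) {
                lastFailure = e;
                if (attempt == maxRetries) {
                    break;  // retries exhausted; surface the failure below
                }
                // Back off before the next attempt: current interval plus some
                // jitter, then double the interval.
                Thread.sleep(sleepMs + ThreadLocalRandom.current().nextLong(sleepMs / 2 + 1));
                sleepMs *= 2;
            }
        }
        throw new Exception("The transaction has reached max retry number", lastFailure);
    }

    public static void main(String[] args) throws Exception {
        // 750 ms base sleep and 20 retries, matching the values in the description.
        RetryWithBackoff retrier = new RetryWithBackoff(20, 750);
        // Stand-in for a permission-update transaction that may hit a duplicate-key error.
        System.out.println(retrier.execute(() -> "committed"));
    }
}
{code}

Note that every retry adds latency to the calling request, which is the trade-off the description points out: raising the sleep interval and retry count alone does not resolve the duplicate-key contention under heavily concurrent permission updates.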



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)