Posted to dev@sentry.apache.org by Diego Fustes Villadóniga <df...@oesia.com> on 2017/06/06 06:20:15 UTC

Problems with Sentry, possibly Denial of Service

Dear colleagues,

We have installed a Kerberized Cloudera cluster, version 5.11. We have enabled Sentry to enforce authorization for Hive, Impala, Solr, etc. However, since then we have seen downtimes in the Hive Metastore, apparently caused by its connection to the Sentry server. This is the stack trace from the metastore:

2017-05-31 12:10:15,539 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-5-thread-50]: MetaException(message:Failed to connect to Sentry service null)
        at org.apache.sentry.binding.metastore.SentryMetastorePostEventListener.getSentryServiceClient(SentryMetastorePostEventListener.java:308)
        at org.apache.sentry.binding.metastore.SentryMetastorePostEventListener.dropSentryPrivileges(SentryMetastorePostEventListener.java:351)
        at org.apache.sentry.binding.metastore.SentryMetastorePostEventListener.dropSentryDbPrivileges(SentryMetastorePostEventListener.java:318)
        at org.apache.sentry.binding.metastore.SentryMetastorePostEventListener.onDropDatabase(SentryMetastorePostEventListener.java:199)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_database_core(HiveMetaStore.java:1180)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_database(HiveMetaStore.java:1212)
        at sun.reflect.GeneratedMethodAccessor62.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
        at com.sun.proxy.$Proxy14.drop_database(Unknown Source)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_database.getResult(ThriftHiveMetastore.java:9005)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_database.getResult(ThriftHiveMetastore.java:8989)
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:735)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:730)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:730)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

Meanwhile, before the metastore goes down, Sentry starts to log this warning:

2017-05-31 12:26:35,619 WARN org.apache.thrift.server.TThreadPoolServer: Task has been rejected by ExecutorService 10 times till timedout, reason: java.util.concurrent.RejectedExecutionException: Task org.apache.thrift.server.TThreadPoolServer$WorkerProcess@2e97d63b rejected from java.util.concurrent.ThreadPoolExecutor@5c1d81c7[Running, pool size = 500, active threads = 500, queued tasks = 0, completed tasks = 1107]

It seems that there are too many open connections to Sentry. Could it be that some client is not closing them?
How can we overcome this error? We can't use the cluster with these downtimes.

Regards,

Diego


Diego Fustes Villadóniga, Big Data Architect, CCIM


RE: Problems with Sentry, possibly Denial of Service

Posted by Diego Fustes Villadóniga <df...@oesia.com>.
Yes, that makes a lot of sense. Thanks a lot, Alex.

Diego

-----Original Message-----
From: Alexander Kolbasov [mailto:akolb@cloudera.com] 
Sent: Tuesday, June 6, 2017 18:05
To: dev@sentry.apache.org
Subject: Re: Problems with Sentry, possibly Denial of Service

It looks like SENTRY-1759; it should probably be back-ported to 5.11.

- Alex



Re: Problems with Sentry, possibly Denial of Service

Posted by Alexander Kolbasov <ak...@cloudera.com>.
It looks like SENTRY-1759; it should probably be back-ported to 5.11.

- Alex

> On Jun 6, 2017, at 3:46 AM, Diego Fustes Villadóniga <df...@oesia.com> wrote:
> 
> We have done this, and it seems that the service responsible for the high number of connections is Kafka, even though it is not actively used at the moment. This probably happens when Kafka reloads its privilege cache; it looks like it is not closing the previous connections. 
> 
> Should we open a bug for Kafka?
> 
> Regards,
> 
> Diego
> 


RE: Problems with Sentry, possibly Denial of Service

Posted by Diego Fustes Villadóniga <df...@oesia.com>.
We have done this, and it seems that the service responsible for the high number of connections is Kafka, even though it is not actively used at the moment. This probably happens when Kafka reloads its privilege cache; it looks like it is not closing the previous connections.
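
If that is the case, I guess the fix on the Kafka binding side is simply to
close the Sentry client once each reload is done, roughly this pattern
(SentryClient and its methods below are hypothetical placeholders for
illustration, not the real Sentry API):

    // Hypothetical sketch of the pattern that seems to be missing: release the
    // Thrift connection after every privilege-cache reload instead of leaking it.
    SentryClient client = SentryClient.connect(conf);      // hypothetical factory
    try {
        cachedPrivileges = client.listPrivileges("kafka");  // hypothetical RPC call
    } finally {
        client.close();                                     // close the socket even if the call fails
    }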

Should we open a bug for Kafka?

Regards,

Diego


-----Original Message-----
From: Alexander Kolbasov [mailto:akolb@cloudera.com] 
Sent: Tuesday, June 6, 2017 8:37
To: dev <de...@sentry.apache.org>
Subject: Re: Problems with Sentry, possibly Denial of Service

You should be able to see your connections with the netstat command. There might be a connection leak in the Hive code as well.


Re: Problems with Sentry, possibly Denial of Service

Posted by Alexander Kolbasov <ak...@cloudera.com>.
You should be able to see your connections with the netstat command. There
might be a connection leak in the Hive code as well.
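
For example, something along these lines on the Sentry server host should show
which clients are holding the connections (assuming the default RPC port 8038,
i.e. sentry.service.server.rpc-port - adjust if yours differs):

    # Count ESTABLISHED connections to the Sentry port, grouped by client host
    netstat -tn | awk '$4 ~ /:8038$/ && $6 == "ESTABLISHED" {split($5, a, ":"); print a[1]}' \
      | sort | uniq -c | sort -rn

Whichever host keeps hundreds of sockets open is the likely leaker; your
warning shows the Thrift pool saturated (pool size = 500, active threads = 500,
queued tasks = 0), so once that limit is hit new clients get rejected.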
