Posted to issues@ambari.apache.org by "Hadoop QA (JIRA)" <ji...@apache.org> on 2017/03/12 16:25:04 UTC
[jira] [Commented] (AMBARI-20408) Atlas MetaData server start fails
while granting permissions to HBase tables after unkerberizing the cluster
[ https://issues.apache.org/jira/browse/AMBARI-20408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15906576#comment-15906576 ]
Hadoop QA commented on AMBARI-20408:
------------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12857524/AMBARI-20408_trunk_01.patch
against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:red}-1 core tests{color}. The test build failed in ambari-server.
Test results: https://builds.apache.org/job/Ambari-trunk-test-patch/10985//testReport/
Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/10985//console
This message is automatically generated.
> Atlas MetaData server start fails while granting permissions to HBase tables after unkerberizing the cluster
> ------------------------------------------------------------------------------------------------------------
>
> Key: AMBARI-20408
> URL: https://issues.apache.org/jira/browse/AMBARI-20408
> Project: Ambari
> Issue Type: Bug
> Components: ambari-server
> Affects Versions: 2.5.0
> Reporter: Vivek Sharma
> Assignee: Robert Levas
> Priority: Critical
> Labels: system_test
> Fix For: 2.5.0
>
> Attachments: AMBARI-20408_branch-2.5_01.patch, AMBARI-20408_trunk_01.patch
>
>
> STR
> 1. Deploy HDP-2.5.0.0 with Ambari-2.5.0.0 (secure MIT cluster installed via blueprint)
> 2. Express Upgrade the cluster to 2.6.0.0
> 3. Disable Kerberos
> 4. Observed that the Atlas Metadata server start failed with the errors below:
> {code}
> Traceback (most recent call last):
> File "/var/lib/ambari-agent/cache/common-services/ATLAS/0.1.0.2.3/package/scripts/metadata_server.py", line 249, in <module>
> MetadataServer().execute()
> File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 282, in execute
> method(env)
> File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 720, in restart
> self.start(env, upgrade_type=upgrade_type)
> File "/var/lib/ambari-agent/cache/common-services/ATLAS/0.1.0.2.3/package/scripts/metadata_server.py", line 102, in start
> user=params.hbase_user
> File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
> self.env.run()
> File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
> self.run_action(resource, action)
> File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
> provider_action()
> File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 262, in action_run
> tries=self.resource.tries, try_sleep=self.resource.try_sleep)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
> result = function(command, **kwargs)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
> tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
> result = _call(command, **kwargs_copy)
> File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
> raise ExecutionFailed(err_msg, code, out, err)
> resource_management.core.exceptions.ExecutionFailed: Execution of 'cat /var/lib/ambari-agent/tmp/atlas_hbase_setup.rb | hbase shell -n' returned 1. ######## Hortonworks #############
> This is MOTD message, added for testing in qe infra
> atlas_titan
> ATLAS_ENTITY_AUDIT_EVENTS
> atlas
> TABLE
> ATLAS_ENTITY_AUDIT_EVENTS
> atlas_titan
> 2 row(s) in 0.2000 seconds
> nil
> TABLE
> ATLAS_ENTITY_AUDIT_EVENTS
> atlas_titan
> 2 row(s) in 0.0030 seconds
> nil
> java exception
> ERROR Java::OrgApacheHadoopHbaseIpc::RemoteWithExtrasException: org.apache.hadoop.hbase.exceptions.UnknownProtocolException: No registered coprocessor service found for name AccessControlService in region hbase:acl,,1480905643891.19e697cf0c4be8a99c54e39aea069b29.
> at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7692)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1897)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1879)
> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32299)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2141)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:187)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:167)
> {code}
> *Cause*
> When disabling Kerberos, the stack advisor recommendations are not properly applied due to the order of operations and various conditionals.
> *Solution*
> Ensure that the stack advisor recommendations are properly applied when disabling Kerberos.
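The stack trace shows the start step failing because `grant` is issued against an unkerberized cluster, where HBase no longer registers the AccessControlService coprocessor. One way to picture the fix is a guard on the grant step. This is a hypothetical Python sketch for illustration only, not the code from the actual patch; the helper name `atlas_hbase_grants`, its parameters, and the table list are assumptions drawn from the log output above.

```python
# Hypothetical helper illustrating the guard described above; NOT the
# actual AMBARI-20408 patch. Builds the hbase-shell 'grant' statements
# for the Atlas tables only when HBase authorization is in effect.

def atlas_hbase_grants(security_enabled,
                       user="atlas",
                       tables=("atlas_titan", "ATLAS_ENTITY_AUDIT_EVENTS")):
    """Return hbase-shell grant statements, or an empty list on an
    unkerberized cluster where AccessControlService is not registered."""
    if not security_enabled:
        # Issuing 'grant' here would trigger the UnknownProtocolException
        # seen in the stack trace above.
        return []
    return ["grant '%s', 'RWXCA', '%s'" % (user, t) for t in tables]
```

On a secured cluster this yields one `grant` line per Atlas table, of the kind the generated setup script pipes into `hbase shell -n`; after disabling Kerberos it yields none, so the start step would not attempt grants the coprocessor cannot serve.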
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)