Posted to dev@falcon.apache.org by "Sowmya Ramesh (JIRA)" <ji...@apache.org> on 2016/07/01 01:02:10 UTC

[jira] [Assigned] (FALCON-2046) HDFS Replication failing in secure Mode

     [ https://issues.apache.org/jira/browse/FALCON-2046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sowmya Ramesh reassigned FALCON-2046:
-------------------------------------

    Assignee: Sowmya Ramesh

> HDFS Replication failing in secure Mode
> ---------------------------------------
>
>                 Key: FALCON-2046
>                 URL: https://issues.apache.org/jira/browse/FALCON-2046
>             Project: Falcon
>          Issue Type: Bug
>          Components: replication
>    Affects Versions: 0.10
>            Reporter: Murali Ramasami
>            Assignee: Sowmya Ramesh
>            Priority: Critical
>             Fix For: trunk, 0.10
>
>
> HDFS replication fails in secure mode with an "Authentication required" error.
> Scenario:
> HDFS replication from a single source cluster to a single target cluster.
> Extension property file (a sketch mapping these properties to the replication arguments follows the listing):
> {noformat}
> [hrt_qa@nat-os-r6-upns-falcon-multicluster-14 hadoopqe]$ cat /tmp/falcon-extension/HdfsExtensionTesta1b85962.properties
> #
> # Licensed to the Apache Software Foundation (ASF) under one
> # or more contributor license agreements.  See the NOTICE file
> # distributed with this work for additional information
> # regarding copyright ownership.  The ASF licenses this file
> # to you under the Apache License, Version 2.0 (the
> # "License"); you may not use this file except in compliance
> # with the License.  You may obtain a copy of the License at
> #
> #     http://www.apache.org/licenses/LICENSE-2.0
> #
> # Unless required by applicable law or agreed to in writing, software
> # distributed under the License is distributed on an "AS IS" BASIS,
> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> # See the License for the specific language governing permissions and
> # limitations under the License.
> #
> ##### NOTE: This is a TEMPLATE file which can be copied and edited
> jobName = HdfsExtensionTesta1b85962
> jobClusterName = Aa39cd108-b6c6e9ff
> jobValidityStart = 2016-06-21T04:27Z
> jobValidityEnd = 2016-06-21T04:47Z
> jobFrequency = days(1)
> sourceCluster = Aa39cd108-b6c6e9ff
> sourceDir = /tmp/falcon-regression/HdfsExtensionTest/HdfsDR/source
> targetCluster = Aa39cd108-38a7a9cc
> targetDir = /tmp/falcon-regression/HdfsExtensionTest/HdfsDR/target
> jobAclOwner = hrt_qa
> jobAclGroup = users
> jobAclPermission = *
> extensionName = hdfs-mirroring
> jobProcessFrequency = minutes(5)
> {noformat}
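> For reference, here is a minimal sketch (not Falcon code; the endpoint values are assumptions) of how the properties above map onto the replication arguments in the launcher log below. The webhdfs://...:20070 source endpoint and hdfs://...:8020 target endpoint come from the source/target cluster entities, not from this file:
> {noformat}
> import java.io.FileReader;
> import java.util.Properties;
>
> // Illustrative only: shows how sourceDir/targetDir combine with the cluster
> // endpoints to form the -sourcePaths/-targetPath arguments seen in the log.
> public class ShowReplicationArgs {
>     public static void main(String[] args) throws Exception {
>         Properties p = new Properties();
>         p.load(new FileReader(args[0])); // e.g. HdfsExtensionTesta1b85962.properties
>
>         String sourceEndpoint = "webhdfs://source-nn:20070"; // from the source cluster entity (assumed)
>         String targetEndpoint = "hdfs://target-nn:8020";     // from the target cluster entity (assumed)
>
>         System.out.println("-sourcePaths " + sourceEndpoint + p.getProperty("sourceDir"));
>         System.out.println("-targetPath  " + targetEndpoint + p.getProperty("targetDir"));
>     }
> }
> {noformat}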
> Please see the application log below (an analysis and a token pre-fetch sketch follow the log):
> {noformat}
> =================================================================
> >>> Invoking Main class now >>>
> Fetching child yarn jobs
> tag id : oozie-62d207ec7d2c61db9dd3220d0fda7c22
> Child yarn jobs are found -
> Main class        : org.apache.falcon.replication.FeedReplicator
> Arguments         :
>                     -Dmapred.job.queue.name=default
>                     -Dmapred.job.priority=NORMAL
>                     -maxMaps
>                     1
>                     -mapBandwidth
>                     100
>                     -sourcePaths
>                     webhdfs://nat-os-r6-upns-falcon-multicluster-14.openstacklocal:20070/tmp/falcon-regression/HdfsExtensionTest/HdfsDR/source
>                     -targetPath
>                     hdfs://nat-os-r6-upns-falcon-multicluster-10.openstacklocal:8020/tmp/falcon-regression/HdfsExtensionTest/HdfsDR/target
>                     -falconFeedStorageType
>                     FILESYSTEM
>                     -availabilityFlag
>                     NA
>                     -counterLogDir
>                     hdfs://nat-os-r6-upns-falcon-multicluster-14.openstacklocal:8020/tmp/fs/falcon/workflows/process/HdfsExtensionTesta1b85962/logs/job-2016-06-21-04-27/
> <<< Invocation of Main class completed <<<
> Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.JavaMain], main() threw exception, org.apache.hadoop.security.AccessControlException: Authentication required
> org.apache.oozie.action.hadoop.JavaMainException: org.apache.hadoop.security.AccessControlException: Authentication required
>         at org.apache.oozie.action.hadoop.JavaMain.run(JavaMain.java:59)
>         at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:51)
>         at org.apache.oozie.action.hadoop.JavaMain.main(JavaMain.java:35)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:242)
>         at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
>         at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
>         at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>         at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
> Caused by: org.apache.hadoop.security.AccessControlException: Authentication required
>         at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:457)
>         at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:113)
>         at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:738)
>         at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:582)
>         at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:612)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>         at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:608)
>         at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:1505)
>         at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getDelegationToken(WebHdfsFileSystem.java:331)
>         at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getAuthParameters(WebHdfsFileSystem.java:547)
>         at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.toUrl(WebHdfsFileSystem.java:568)
>         at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractFsPathRunner.getUrl(WebHdfsFileSystem.java:838)
>         at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:733)
>         at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:582)
>         at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:612)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>         at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:608)
>         at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:985)
>         at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:1001)
>         at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
>         at org.apache.hadoop.fs.Globber.glob(Globber.java:252)
>         at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1666)
>         at org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:77)
>         at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:86)
>         at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:390)
>         at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:188)
>         at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
>         at org.apache.falcon.replication.FeedReplicator.run(FeedReplicator.java:97)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>         at org.apache.falcon.replication.FeedReplicator.main(FeedReplicator.java:62)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.oozie.action.hadoop.JavaMain.run(JavaMain.java:56)
>         ... 15 more
> Oozie Launcher failed, finishing Hadoop job gracefully
> Oozie Launcher, uploading action data to HDFS sequence file: hdfs://nat-os-r6-upns-falcon-multicluster-14.openstacklocal:8020/user/hrt_qa/oozie-oozi/0000089-160621033852265-oozie-oozi-W/dr-replication--java/action-data.seq
> {noformat}
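> Analysis: the stack trace shows the failure happening while DistCp builds its copy listing. GlobbedCopyListing.doBuildListing globs the webhdfs:// source path, which makes WebHdfsFileSystem try to fetch a delegation token (getDelegationToken at WebHdfsFileSystem.java:1505). Inside the launched task there is no Kerberos TGT and no pre-acquired token for the source namenode, so the request fails with AccessControlException: Authentication required.
> Below is a minimal, untested sketch (not necessarily the actual fix) of the usual remedy: acquire delegation tokens for both namenodes on the client/launcher side, while Kerberos credentials are still available, so the job's tasks never have to authenticate themselves. PrefetchTokens and the placeholder hostnames are illustrative, not Falcon code.
> {noformat}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.mapreduce.security.TokenCache;
> import org.apache.hadoop.security.Credentials;
>
> // Sketch only: pre-fetch HDFS/WebHDFS delegation tokens before job submission.
> public class PrefetchTokens {
>     public static void main(String[] args) throws Exception {
>         Configuration conf = new Configuration();
>         // Placeholder endpoints (assumed); in Falcon these come from the
>         // source/target cluster entities.
>         Path source = new Path("webhdfs://source-nn:20070/tmp/falcon-regression/HdfsExtensionTest/HdfsDR/source");
>         Path target = new Path("hdfs://target-nn:8020/tmp/falcon-regression/HdfsExtensionTest/HdfsDR/target");
>
>         Credentials creds = new Credentials();
>         // Fetches and caches a token for each distinct filesystem in the list.
>         TokenCache.obtainTokensForNamenodes(creds, new Path[]{source, target}, conf);
>         // creds would then be attached to the submitted job,
>         // e.g. job.getCredentials().addAll(creds).
>     }
> }
> {noformat}
> For Oozie-launched actions the equivalent knob is usually to list the remote namenode in oozie.launcher.mapreduce.job.hdfs-servers (assumption; the exact property name may vary by version) so the launcher obtains the tokens up front.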



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)