Posted to dev@apex.apache.org by "David Yan (JIRA)" <ji...@apache.org> on 2016/02/24 01:44:18 UTC

[jira] [Commented] (APEXCORE-45) Certain HDFS calls from Apex give NPE with Hadoop 2.7.x

    [ https://issues.apache.org/jira/browse/APEXCORE-45?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15159973#comment-15159973 ] 

David Yan commented on APEXCORE-45:
-----------------------------------

I just tried installing DataTorrent RTS Community Edition 3.2.0 on vanilla Apache Hadoop 2.7.2 (the latest version), and it still gives the same error on the namenode.  HDFS was formatted fresh before the installation.
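The stack trace below shows the namenode NPE being triggered by a setPermission call against a path that does not exist. A defensive client-side pattern is to create the target directory (and any missing parents) and confirm it exists before touching permissions. This is a hypothetical sketch, using java.nio.file from the JDK as a stand-in for the HDFS FileSystem API (mkdirs/setPermission); the class and method names are illustrative, not the installer's actual code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical guard, using java.nio.file as a stand-in for the HDFS
// FileSystem API: create the target directory and any missing parents,
// then confirm it exists and is writable, so that a permission call is
// never issued against a nonexistent path.
public class SafeDirSetup {
    static boolean prepareDfsRoot(Path root) {
        try {
            // Analogous to FileSystem.mkdirs() on HDFS.
            Files.createDirectories(root);
            // Only proceed to permission changes once the path is known
            // to exist; on Hadoop 2.7.x, setPermission on a missing path
            // surfaces as an NPE in the namenode rather than a clean error.
            return Files.isDirectory(root) && Files.isWritable(root);
        } catch (IOException e) {
            // e.g. the parent is not writable by the installing user.
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("dfs-root-demo");
        Path target = tmp.resolve("user/root/datatorrent");
        System.out.println(prepareDfsRoot(target)); // expected: true
    }
}
```

The same check-before-chmod ordering applies when the directory's parent is unwritable: mkdirs fails with a clean error instead of leaving a nonexistent path to trip the namenode.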

> Certain HDFS calls from Apex give NPE with Hadoop 2.7.x
> -------------------------------------------------------
>
>                 Key: APEXCORE-45
>                 URL: https://issues.apache.org/jira/browse/APEXCORE-45
>             Project: Apache Apex Core
>          Issue Type: Bug
>            Reporter: David Yan
>            Assignee: David Yan
>
> How to reproduce:
> - install apache hadoop 2.7.x
> - install RTS 3.0.0 community edition as root
> - On the hadoop installation screen in the install wizard, for the DFS root directory field, enter a directory that does not exist and whose parent directory the dtadmin user does not have write access to.  For example: /user/root/datatorrent.
> Exception is thrown on the namenode:
> {noformat}
> java.lang.NullPointerException
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkOwner(FSPermissionChecker.java:247)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:227)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1698)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1682)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkOwner(FSDirectory.java:1651)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setPermission(FSDirAttrOp.java:61)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1653)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:693)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:453)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
> {noformat}
> binhnv80@gmail.com on the apex-dev mailing list also reported another exception in STRAM when FSStorageAgent tries to create a file in HDFS:
> {noformat}
> ERROR com.datatorrent.stram.StreamingAppMaster: Exiting Application Master
> java.lang.NullPointerException
>         at org.apache.hadoop.fs.AbstractFileSystem.create(AbstractFileSystem.java:551)
>         at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:686)
>         at org.apache.hadoop.fs.FileContext$3.next(FileContext.java:682)
>         at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
>         at org.apache.hadoop.fs.FileContext.create(FileContext.java:682)
>         at com.datatorrent.common.util.FSStorageAgent.save(FSStorageAgent.java:92)
>         at com.datatorrent.stram.plan.physical.PhysicalPlan.initCheckpoint(PhysicalPlan.java:944)
>         at com.datatorrent.stram.plan.physical.PhysicalPlan.<init>(PhysicalPlan.java:363)
>         at com.datatorrent.stram.StreamingContainerManager.<init>(StreamingContainerManager.java:330)
>         at com.datatorrent.stram.StreamingContainerManager.getInstance(StreamingContainerManager.java:2828)
>         at com.datatorrent.stram.StreamingAppMasterService.serviceInit(StreamingAppMasterService.java:516)
>         at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
>         at com.datatorrent.stram.StreamingAppMaster.main(StreamingAppMaster.java:98)
> {noformat}
> I am not able to reproduce the second exception.
> Here's the link to the Google Groups email thread:
> https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!topic/apex-dev/CxZN-QtR5BE
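The first stack trace points at FSPermissionChecker.checkOwner dereferencing the resolved inode, which is null when the path does not exist. A hypothetical sketch of the namenode-side guard that would turn the NPE into a proper error (the class and field names here are illustrative, not Hadoop's actual internals):

```java
import java.io.FileNotFoundException;

// Hypothetical sketch of the namenode-side guard implied by the stack
// trace: checkOwner dereferences the resolved inode, so a missing path
// (null inode) must be rejected explicitly rather than allowed to NPE.
public class OwnerCheckSketch {
    static final class Inode {
        final String owner;
        Inode(String owner) { this.owner = owner; }
    }

    // Stand-in for FSPermissionChecker.checkOwner(...)
    static void checkOwner(Inode inode, String user) throws FileNotFoundException {
        if (inode == null) {
            // Without this guard, the inode.owner access below throws the
            // NullPointerException seen in the report.
            throw new FileNotFoundException("Path does not exist");
        }
        if (!inode.owner.equals(user)) {
            throw new SecurityException("Permission denied: not owner");
        }
    }

    public static void main(String[] args) {
        try {
            checkOwner(null, "dtadmin");
        } catch (FileNotFoundException e) {
            System.out.println("rejected missing path cleanly");
        }
    }
}
```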



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)