Posted to issues@spark.apache.org by "Constantin (JIRA)" <ji...@apache.org> on 2017/07/03 13:53:00 UTC
[jira] [Updated] (SPARK-21288) Several files are missing in the results of the execution of the spark application.
[ https://issues.apache.org/jira/browse/SPARK-21288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Constantin updated SPARK-21288:
-------------------------------
Description:
The Spark application does not save all files into the output folder; for example, only files from 'part-r-00101.avro' to 'part-r-00127.avro' are present, while files from 'part-r-00000.avro' to 'part-r-00127.avro' are expected. It looks like all files were written under _temporary/..., but by the time the results were to be moved to the output folder, the files had disappeared from _temporary. In the execution logs I saw that all tasks were committed with FileOutputCommitter. There were no task preemptions, and speculation was disabled.
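For reference, the speculation setting can be read back from the running job's configuration. A minimal sketch, not from the report, assuming an existing SparkContext named sc:
{code:scala}
// Minimal sketch (assumed context): `sc` is an existing SparkContext.
// spark.speculation defaults to "false" when the key is unset;
// speculative duplicate attempts are a classic source of competing
// writers under _temporary, so it is worth confirming.
println(sc.getConf.get("spark.speculation", "false"))
{code}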
Saving to HDFS like this:
{code:scala}
import org.apache.avro.mapred.AvroKey
import org.apache.avro.mapreduce.AvroKeyOutputFormat
import org.apache.hadoop.io.NullWritable

rdd
  // Wrap each value in an AvroKey; the value side is unused by
  // AvroKeyOutputFormat, so null is passed in place of NullWritable.
  .map(v => new AvroKey[V](v) -> null)
  .saveAsNewAPIHadoopFile(
    directory,
    classOf[AvroKey[V]],
    classOf[NullWritable],
    classOf[AvroKeyOutputFormat[V]],
    createJob().getConfiguration
  )
{code}
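The createJob() helper is not shown in the report. For AvroKeyOutputFormat to write the records, the output key schema has to be registered on the job beforehand, so a hypothetical reconstruction of such a helper might look like this (the schema parameter is an assumption):
{code:scala}
import org.apache.avro.Schema
import org.apache.avro.mapreduce.AvroJob
import org.apache.hadoop.mapreduce.Job

// Hypothetical reconstruction of createJob(): AvroKeyOutputFormat
// requires the output key schema to be present in the job
// configuration before any records are written.
def createJob(schema: Schema): Job = {
  val job = Job.getInstance()
  AvroJob.setOutputKeySchema(job, schema)
  job
}
{code}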
For files that appear in the output folder, there are exceptions like this in the logs:
{noformat}
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /user/USER/storage/dt=2017-07-02--20-03-14-415/_temporary/0/_temporary/attempt_201707022303_0011_r_000082_0/part-r-00082.avro (inode 35903648): File does not exist. Holder DFSClient_NONMAPREDUCE_-1729744390_72 does not have any open files.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3597)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:3400)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3256)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:677)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:213)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:485)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
{noformat}
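The path in the exception is the standard FileOutputCommitter staging location: each task attempt writes under ${output}/_temporary/${appAttemptId}/_temporary/${attemptId}, commitTask() renames the attempt directory to the committed task location, and commitJob() finally merges the committed output into the output folder. A file vanishing from the staging directory while the writer still holds an HDFS lease therefore points at something deleting the attempt directory mid-write. A minimal sketch, using only the public Hadoop committer API, that reproduces the staging path seen in the log above:
{code:scala}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.mapreduce.TaskAttemptID
import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
import org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl

// Rebuild the staging path for the failing attempt from the log.
val output = new Path("/user/USER/storage/dt=2017-07-02--20-03-14-415")
val attempt = TaskAttemptID.forName("attempt_201707022303_0011_r_000082_0")
val context = new TaskAttemptContextImpl(new Configuration(), attempt)
val committer = new FileOutputCommitter(output, context)

// Prints .../_temporary/0/_temporary/attempt_201707022303_0011_r_000082_0,
// matching the path in the LeaseExpiredException above.
println(committer.getWorkPath)
{code}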
was:
The Spark application does not save all files into the output folder; for example, only files from 'part-r-00101.avro' to 'part-r-00127.avro' are present, while files from 'part-r-00000.avro' to 'part-r-00127.avro' are expected. It looks like all files were written under _temporary/..., but by the time the results were to be moved to the output folder, the files had disappeared from _temporary. In the execution logs I saw that all tasks were committed with FileOutputCommitter. There were no task preemptions, and speculation was disabled.
Saving to HDFS like this:
{code:scala}
import org.apache.avro.mapred.AvroKey
import org.apache.avro.mapreduce.AvroKeyOutputFormat
import org.apache.hadoop.io.NullWritable

rdd
  // Wrap each value in an AvroKey; the value side is unused by
  // AvroKeyOutputFormat, so null is passed in place of NullWritable.
  .map(v => new AvroKey[V](v) -> null)
  .saveAsNewAPIHadoopFile(
    directory,
    classOf[AvroKey[V]],
    classOf[NullWritable],
    classOf[AvroKeyOutputFormat[V]],
    createJob().getConfiguration
  )
{code}
> Several files are missing in the results of the execution of the spark application.
> -----------------------------------------------------------------------------------
>
> Key: SPARK-21288
> URL: https://issues.apache.org/jira/browse/SPARK-21288
> Project: Spark
> Issue Type: Bug
> Components: Input/Output
> Affects Versions: 1.6.0
> Environment: cloudera: Cloudera Express 5.10.0
> java: HotSpot 1.8.0_77
> spark: spark-core_2.10-1.6.0-cdh5.7.0.jar
> hadoop: 2.6.0-cdh5.7.0 from c00978c67b0d3fe9f3b896b5030741bd40bf541a
> hdfs: 2.6.0-cdh5.7.0 from rc00978c67b0d3fe9f3b896b5030741bd40bf541a
> yarn: 2.6.0-cdh5.7.0 from c00978c67b0d3fe9f3b896b5030741bd40bf541a
> Reporter: Constantin
>
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org