Posted to issues@tez.apache.org by "yeshavora (JIRA)" <ji...@apache.org> on 2013/07/03 00:28:21 UTC
[jira] [Updated] (TEZ-280) WordcountMrr job fails with "java.io.EOFException"
[ https://issues.apache.org/jira/browse/TEZ-280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
yeshavora updated TEZ-280:
--------------------------
Description:
Running the wordcountmrr job on the Tez framework using tez-mapreduce-examples.jar fails with "java.io.EOFException".
The Application Master log snapshot:
2013-06-24 18:06:25,777 INFO [AsyncDispatcher event handler] org.apache.tez.dag.app.dag.impl.TaskAttemptImpl: attempt_1372093551717_10_1_000001_000000_3 TaskAttempt Transitioned from RUNNING to FAIL_IN_PROGRESS
2013-06-24 18:06:25,777 INFO [AsyncDispatcher event handler] org.apache.tez.dag.history.HistoryEventHandler: [HISTORY][DAG:dag_1372093551717_10_000001][Event:TASK_ATTEMPT_FINISHED]: vertexName=ivertex1, taskAttemptId=attempt_1372093551717_10_1_000001_000000_3, finishTime=1372097185777, status=FAILED, diagnostics=Error: java.io.EOFException
at java.io.DataInputStream.readFully(DataInputStream.java:180)
at org.apache.hadoop.io.Text.readFields(Text.java:292)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:71)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:42)
at org.apache.hadoop.mapreduce.task.ReduceContextImpl.nextKeyValue(ReduceContextImpl.java:142)
at org.apache.hadoop.mapreduce.task.ReduceContextImpl.nextKey(ReduceContextImpl.java:121)
at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.nextKey(WrappedReducer.java:297)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:170)
at org.apache.tez.mapreduce.processor.reduce.ReduceProcessor.runNewReducer(ReduceProcessor.java:331)
at org.apache.tez.mapreduce.processor.reduce.ReduceProcessor.process(ReduceProcessor.java:150)
at org.apache.tez.engine.task.RuntimeTask.run(RuntimeTask.java:79)
at org.apache.tez.mapreduce.task.MRRuntimeTask.run(MRRuntimeTask.java:144)
at org.apache.hadoop.mapred.YarnTezDagChild.runTezTask(YarnTezDagChild.java:334)
at org.apache.hadoop.mapred.YarnTezDagChild.access$000(YarnTezDagChild.java:76)
at org.apache.hadoop.mapred.YarnTezDagChild$2.run(YarnTezDagChild.java:178)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
at org.apache.hadoop.mapred.YarnTezDagChild.main(YarnTezDagChild.java:175)
, counters=Counters: 33
File System Counters
FILE: BYTES_READ=265289728
FILE: BYTES_WRITTEN=574295873
FILE: READ_OPS=0
FILE: LARGE_READ_OPS=0
FILE: WRITE_OPS=0
HDFS: BYTES_READ=0
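For context (an interpretation, not from the original report): the top frames show DataInputStream.readFully() hitting end-of-stream while Text.readFields() is deserializing a value in the reduce input, which typically means the merged intermediate data the reducer reads was truncated or corrupt. A minimal JDK-only sketch of that failure mode follows; it is illustrative only (Hadoop's Text actually uses a variable-length prefix rather than readInt(), and the helper name hitsEof is hypothetical), but the mechanism is the same: a length-prefixed record promises more bytes than the stream contains.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class TruncatedReadDemo {

    // Returns true if readFully() throws EOFException on a truncated record.
    static boolean hitsEof() throws IOException {
        // A length-prefixed record claiming 10 payload bytes but carrying only 3,
        // mimicking a truncated intermediate/shuffle segment.
        byte[] truncated = {0, 0, 0, 10, 'h', 'e', 'l'};
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(truncated));
        byte[] payload = new byte[in.readInt()]; // declared length: 10
        try {
            in.readFully(payload); // only 3 bytes remain -> EOFException
            return false;
        } catch (EOFException e) {
            return true;
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("EOFException thrown: " + hitsEof());
    }
}
```

This is why the exception surfaces deep inside WritableSerialization rather than at the point of corruption: readFully() is the first call that notices the stream ran out before the declared record length was satisfied.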
was:
Running the wordcountmrr job on the Tez framework using tez-mapreduce-examples.jar fails with "java.io.EOFException".
The container log snapshot:
2013-06-24 18:06:25,777 INFO [AsyncDispatcher event handler] org.apache.tez.dag.app.dag.impl.TaskAttemptImpl: attempt_1372093551717_10_1_000001_000000_3 TaskAttempt Transitioned from RUNNING to FAIL_IN_PROGRESS
2013-06-24 18:06:25,777 INFO [AsyncDispatcher event handler] org.apache.tez.dag.history.HistoryEventHandler: [HISTORY][DAG:dag_1372093551717_10_000001][Event:TASK_ATTEMPT_FINISHED]: vertexName=ivertex1, taskAttemptId=attempt_1372093551717_10_1_000001_000000_3, finishTime=1372097185777, status=FAILED, diagnostics=Error: java.io.EOFException
at java.io.DataInputStream.readFully(DataInputStream.java:180)
at org.apache.hadoop.io.Text.readFields(Text.java:292)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:71)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:42)
at org.apache.hadoop.mapreduce.task.ReduceContextImpl.nextKeyValue(ReduceContextImpl.java:142)
at org.apache.hadoop.mapreduce.task.ReduceContextImpl.nextKey(ReduceContextImpl.java:121)
at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.nextKey(WrappedReducer.java:297)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:170)
at org.apache.tez.mapreduce.processor.reduce.ReduceProcessor.runNewReducer(ReduceProcessor.java:331)
at org.apache.tez.mapreduce.processor.reduce.ReduceProcessor.process(ReduceProcessor.java:150)
at org.apache.tez.engine.task.RuntimeTask.run(RuntimeTask.java:79)
at org.apache.tez.mapreduce.task.MRRuntimeTask.run(MRRuntimeTask.java:144)
at org.apache.hadoop.mapred.YarnTezDagChild.runTezTask(YarnTezDagChild.java:334)
at org.apache.hadoop.mapred.YarnTezDagChild.access$000(YarnTezDagChild.java:76)
at org.apache.hadoop.mapred.YarnTezDagChild$2.run(YarnTezDagChild.java:178)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1477)
at org.apache.hadoop.mapred.YarnTezDagChild.main(YarnTezDagChild.java:175)
, counters=Counters: 33
File System Counters
FILE: BYTES_READ=265289728
FILE: BYTES_WRITTEN=574295873
FILE: READ_OPS=0
FILE: LARGE_READ_OPS=0
FILE: WRITE_OPS=0
HDFS: BYTES_READ=0
> WordcountMrr job fails with "java.io.EOFException"
> --------------------------------------------------
>
> Key: TEZ-280
> URL: https://issues.apache.org/jira/browse/TEZ-280
> Project: Apache Tez
> Issue Type: Bug
> Reporter: yeshavora
> Labels: TEZ-0.2.0
>
> Running the wordcountmrr job on the Tez framework using tez-mapreduce-examples.jar fails with "java.io.EOFException". (Full description and stack trace above.)
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira