Posted to user@hive.apache.org by Neil Conway <ne...@gmail.com> on 2009/05/06 07:18:25 UTC

Tasks killed with "Filesystem closed"

I'm seeing tasks killed, with the following exception in the log:

2009-05-06 04:58:20,799 WARN org.apache.hadoop.mapred.TaskTracker:
Error running child
java.io.IOException: Filesystem closed
        at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:197)
        at org.apache.hadoop.hdfs.DFSClient.access$600(DFSClient.java:65)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.close(DFSClient.java:1575)
        at java.io.FilterInputStream.close(FilterInputStream.java:155)
        at org.apache.hadoop.util.LineReader.close(LineReader.java:91)
        at org.apache.hadoop.mapred.LineRecordReader.close(LineRecordReader.java:169)
        at org.apache.hadoop.hive.ql.io.HiveRecordReader.close(HiveRecordReader.java:36)
        at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.close(MapTask.java:198)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:346)
        at org.apache.hadoop.mapred.Child.main(Child.java:158)
2009-05-06 04:58:20,802 WARN org.apache.hadoop.mapred.TaskRunner:
Parent died.  Exiting attempt_200905060443_0003_m_000006_1

Most of the killed tasks are maps, but I sometimes see this for
reduces as well. Has anyone else seen this problem? The job still
completes, but nearly every job has at least one task that gets
killed in this fashion.
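
My best guess from the trace (unverified) is that the filesystem
handle is being closed out from under the record reader:
FileSystem.get() returns one cached instance per JVM, and the client
library registers a JVM shutdown hook that closes all cached
filesystems, so when a task is killed the record reader's close() in
the main thread can race with that hook. A minimal sketch of the
shared-cache behavior (assumes fs.default.name points at HDFS; the
path and class name are placeholders):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class FsCacheDemo {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Both calls return the *same* cached FileSystem instance.
            FileSystem a = FileSystem.get(conf);
            FileSystem b = FileSystem.get(conf);
            a.close();
            // Any further use of b now fails; against HDFS the check lives in
            // DFSClient.checkOpen() and throws "java.io.IOException: Filesystem closed".
            b.open(new Path("/tmp/demo.txt"));
        }
    }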

Hadoop 0.19.1, Hive r763879 (long story). I'm running this on a
10-node cluster. I'm storing the metastore in Derby in network server
mode and running a bunch of queries concurrently; each node is fairly
heavily loaded (load average typically > 4).

Any input would be very welcome.

Thanks,

Neil

Re: Tasks killed with "Filesystem closed"

Posted by Neil Conway <ne...@gmail.com>.
On Tue, May 5, 2009 at 10:21 PM, Prasad Chakka <pc...@facebook.com> wrote:
> Are these speculative execution maps?

Yep, it looks like they are. So presumably this isn't worth worrying
about? Avoiding the ugly exception would still be nice, at least.
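
If it's just noise from losing speculative attempts, one workaround
(untested here) would be to disable speculative execution for these
jobs. A sketch against the stock Hadoop 0.19 JobConf API; in Hive the
same two properties can also be set per-session with
"set <property>=false;":

    import org.apache.hadoop.mapred.JobConf;

    public class DisableSpeculation {
        public static void main(String[] args) {
            JobConf conf = new JobConf();
            // Equivalent to conf.setMapSpeculativeExecution(false) and
            // conf.setReduceSpeculativeExecution(false).
            conf.setBoolean("mapred.map.tasks.speculative.execution", false);
            conf.setBoolean("mapred.reduce.tasks.speculative.execution", false);
            System.out.println("speculative maps enabled: "
                    + conf.getBoolean("mapred.map.tasks.speculative.execution", true));
        }
    }

The obvious trade-off is giving up the straggler protection that
speculative execution exists to provide.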

Neil

Re: Tasks killed with "Filesystem closed"

Posted by Prasad Chakka <pc...@facebook.com>.
Are these speculative execution maps?

