Posted to common-dev@hadoop.apache.org by "Marius (JIRA)" <ji...@apache.org> on 2015/07/23 10:57:04 UTC

[jira] [Created] (HADOOP-12260) BlockSender.sendChunks() exception

Marius created HADOOP-12260:
-------------------------------

             Summary: BlockSender.sendChunks() exception
                 Key: HADOOP-12260
                 URL: https://issues.apache.org/jira/browse/HADOOP-12260
             Project: Hadoop Common
          Issue Type: Bug
    Affects Versions: 2.7.1, 2.6.0
         Environment: OS: CentOS Linux release 7.1.1503 (Core) 
Kernel: 3.10.0-229.1.2.el7.x86_64
            Reporter: Marius


I was running some streaming jobs with Avro files on my Hadoop cluster. They performed poorly, so I checked the logs of my datanodes and found this:
http://pastebin.com/DXKJJ55z

The cluster is running on CentOS machines:
CentOS Linux release 7.1.1503 (Core) 
This is the Kernel:
3.10.0-229.1.2.el7.x86_64
No one on the user list replied, and I could not find anything helpful on the internet apart from disk failure, which is unlikely to be the cause here: the error shows up on several machines, and it is not very likely that all of their disks fail at the same time.
This error is not reported on the console when running a job; it occurs from time to time, then disappears and comes back again.
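
To check whether the exceptions are confined to a single machine, a quick per-log tally can be done with something like the sketch below (the log path pattern is an assumption; adjust it for your installation):

import glob
import re

# Datanode log location is an assumption; adjust for your installation.
LOG_GLOB = "/var/log/hadoop-hdfs/hadoop-hdfs-datanode-*.log*"
# This is the message the datanode logs alongside the stack trace.
PATTERN = re.compile(r"BlockSender\.sendChunks\(\) exception")

for path in sorted(glob.glob(LOG_GLOB)):
    with open(path) as f:
        hits = sum(1 for line in f if PATTERN.search(line))
    if hits:
        print("%s: %d occurrence(s)" % (path, hits))
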
The block size of the cluster is the default value (128 MB in these versions).
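
To confirm the effective value, something like this should work (a sketch, assuming the hdfs CLI from the cluster install is on the PATH):

import subprocess

# dfs.blocksize defaults to 134217728 (128 MB) in Hadoop 2.6/2.7.
# A value configured with a size suffix (e.g. "128m") would need
# extra parsing; this sketch assumes a plain byte count.
raw = subprocess.check_output(["hdfs", "getconf", "-confKey", "dfs.blocksize"])
size = int(raw.strip())
print("dfs.blocksize = %d bytes (%.0f MB)" % (size, size / 1024.0 / 1024.0))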

This is my command:
hadoop jar hadoop-streaming-2.7.1.jar -files mapper.py,reducer.py,avro-1.7.7.jar,avro-mapred-1.7.7-hadoop2.jar -D mapreduce.job.reduces=15 -libjars avro-1.7.7.jar,avro-mapred-1.7.7-hadoop2.jar -input /Y/Y1.avro -output /htest/output -mapper mapper.py -reducer reducer.py -inputformat org.apache.avro.mapred.AvroAsTextInputFormat
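
For context, AvroAsTextInputFormat hands each Avro record to the streaming mapper as one JSON-encoded line on stdin, so the mapper is roughly of this shape (a sketch only; the field name is a placeholder, not the actual schema):

#!/usr/bin/env python
import sys
import json

# Each stdin line is the JSON form of one Avro record (the streaming
# value is empty, so a trailing tab may be present).
for line in sys.stdin:
    line = line.strip()
    if not line:
        continue
    try:
        record = json.loads(line)
    except ValueError:
        continue  # skip anything that is not valid JSON
    if not isinstance(record, dict):
        continue  # assumes the Avro records are, well, records
    key = record.get("some_field", "")  # "some_field" is a placeholder
    print("%s\t1" % key)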

Marius


