Posted to hdfs-user@hadoop.apache.org by Karim Awara <ka...@kaust.edu.sa> on 2014/04/17 12:58:05 UTC

Task or job tracker seems not working?

Hi,

I am running a MapReduce job on a cluster of 16 machines. HDFS is working
normally; however, when I run a MapReduce job, it fails with:

java.io.IOException: Bad connect ack with firstBadLink

although all the daemon processes are up.




--
Best Regards,
Karim Ahmed Awara

-- 

------------------------------
This message and its contents, including attachments are intended solely 
for the original recipient. If you are not the intended recipient or have 
received this message in error, please notify me immediately and delete 
this message from your computer system. Any unauthorized use or 
distribution is prohibited. Please consider the environment before printing 
this email.

Re: Task or job tracker seems not working?

Posted by Shumin Guo <gs...@gmail.com>.
The error message indicates that you are using the local FS rather than
HDFS (note the file:/ URI in the log). So you need to make sure your HDFS
cluster is up and running before submitting any MapReduce jobs. For example,
you can use fsck or other HDFS commands to check that the cluster is healthy.
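
For instance, a quick health check along those lines (Hadoop 1.x command
names; adjust to your installation) might look like:

```shell
# Verify the expected daemons are running on each node
jps                       # should list NameNode/DataNode, JobTracker/TaskTracker

# Check overall HDFS health and block replication
hadoop fsck /

# Confirm that datanodes have registered with the namenode
hadoop dfsadmin -report   # check the "Datanodes available" line
```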


On Thu, Apr 17, 2014 at 8:51 AM, Karim Awara <ka...@kaust.edu.sa> wrote:

> Hi,
>
> I still don't think the problem is with the file size. Here is a snippet
> of the JobTracker log from just after starting Hadoop; it already shows
> an exception:
>
> 2014-04-17 16:16:31,391 INFO org.apache.hadoop.mapred.JobTracker: Setting
> safe mode to false. Requested by : karim
> 2014-04-17 16:16:31,400 INFO org.apache.hadoop.util.NativeCodeLoader:
> Loaded the native-hadoop library
> 2014-04-17 16:16:31,429 INFO org.apache.hadoop.mapred.JobTracker: Cleaning
> up the system directory
> 2014-04-17 16:16:31,443 INFO org.apache.hadoop.mapred.JobHistory: Creating
> DONE folder at file:/home/karim/systems/hadoop-1.1.2/logs/history/done
> 2014-04-17 16:16:31,445 INFO org.apache.hadoop.mapred.JobTracker: History
> server being initialized in embedded mode
> 2014-04-17 16:16:31,447 INFO org.apache.hadoop.mapred.JobHistoryServer:
> Started job history server at: localhost:50030
> 2014-04-17 16:16:31,448 INFO org.apache.hadoop.mapred.JobTracker: Job
> History Server web address: localhost:50030
> 2014-04-17 16:16:31,449 INFO
> org.apache.hadoop.mapred.CompletedJobStatusStore: Completed job store is
> inactive
> 2014-04-17 16:16:31,474 WARN org.apache.hadoop.hdfs.DFSClient:
> DataStreamer Exception: org.apache.hadoop.ipc.RemoteException:
> java.io.IOException: File
> /home/karim/data/hadoop-1.1.2_data/hadoop_tmp-karim/mapred/system/
> jobtracker.info could only be replicated to 0 nodes, instead of 1
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:396)
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
>
>
>
> --
> Best Regards,
> Karim Ahmed Awara
>
>
> On Thu, Apr 17, 2014 at 2:23 PM, Karim Awara <ka...@kaust.edu.sa> wrote:
>
>>
>> If the cluster is undergoing a networking issue, HDFS shouldn't be
>> working either, right? The cluster is quite free for my job, and the
>> machines have high specs: 18 GB of memory each, quad core.
>>
>> --
>> Best Regards,
>> Karim Ahmed Awara
>>
>>
>> On Thu, Apr 17, 2014 at 2:18 PM, Nitin Pawar <ni...@gmail.com> wrote:
>>
>>> You need to allocate sufficient memory to the datanodes as well.
>>> Also make sure that none of the network cards on your datanodes have
>>> gone bad.
>>>
>>> Most of the time, the error you saw occurs when the cluster is under
>>> heavy utilization or is experiencing some kind of network issue.
>>>
>>>
>>> On Thu, Apr 17, 2014 at 4:37 PM, Karim Awara <ka...@kaust.edu.sa> wrote:
>>>
>>>>
>>>> I'm setting mapred.child.java.opts to -Xmx8G. The dataset I'm using for
>>>> the MapReduce job is quite small (a few hundred megabytes) as well.
>>>>
>>>> --
>>>> Best Regards,
>>>> Karim Ahmed Awara
>>>>
>>>>
>>>> On Thu, Apr 17, 2014 at 2:01 PM, Nitin Pawar <ni...@gmail.com> wrote:
>>>>
>>>>> Can you tell us the JVM memory allocated to the datanodes?
>>>>>
>>>>>
>>>>> On Thu, Apr 17, 2014 at 4:28 PM, Karim Awara <karim.awara@kaust.edu.sa
>>>>> > wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I am running a MapReduce job on a cluster of 16 machines. HDFS is
>>>>>> working normally; however, when I run a MapReduce job, it fails with:
>>>>>>
>>>>>> java.io.IOException: Bad connect ack with firstBadLink
>>>>>>
>>>>>> although all the daemon processes are up.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Best Regards,
>>>>>> Karim Ahmed Awara
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Nitin Pawar
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Nitin Pawar
>>>
>>
>>
>
>

Re: Task or job tracker seems not working?

Posted by Karim Awara <ka...@kaust.edu.sa>.
Hi,

I still don't think the problem is with the file size. Here is a snippet of
the JobTracker log from just after starting Hadoop; it already shows an
exception:

2014-04-17 16:16:31,391 INFO org.apache.hadoop.mapred.JobTracker: Setting
safe mode to false. Requested by : karim
2014-04-17 16:16:31,400 INFO org.apache.hadoop.util.NativeCodeLoader:
Loaded the native-hadoop library
2014-04-17 16:16:31,429 INFO org.apache.hadoop.mapred.JobTracker: Cleaning
up the system directory
2014-04-17 16:16:31,443 INFO org.apache.hadoop.mapred.JobHistory: Creating
DONE folder at file:/home/karim/systems/hadoop-1.1.2/logs/history/done
2014-04-17 16:16:31,445 INFO org.apache.hadoop.mapred.JobTracker: History
server being initialized in embedded mode
2014-04-17 16:16:31,447 INFO org.apache.hadoop.mapred.JobHistoryServer:
Started job history server at: localhost:50030
2014-04-17 16:16:31,448 INFO org.apache.hadoop.mapred.JobTracker: Job
History Server web address: localhost:50030
2014-04-17 16:16:31,449 INFO
org.apache.hadoop.mapred.CompletedJobStatusStore: Completed job store is
inactive
2014-04-17 16:16:31,474 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer
Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/home/karim/data/hadoop-1.1.2_data/hadoop_tmp-karim/mapred/system/
jobtracker.info could only be replicated to 0 nodes, instead of 1
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
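
(Editor's note: "could only be replicated to 0 nodes" usually means the
namenode sees no usable datanodes. A rough checklist for that situation,
using Hadoop 1.x-era commands and example paths rather than anything
confirmed in this thread, might be:)

```shell
# Are any datanodes registered and alive?
hadoop dfsadmin -report

# Is HDFS stuck in safe mode?
hadoop dfsadmin -safemode get

# Do the datanode data directories have free disk space? (example path)
df -h /home/karim/data

# If a datanode failed to start, its log usually says why (example path)
tail -n 50 $HADOOP_HOME/logs/hadoop-*-datanode-*.log
```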



--
Best Regards,
Karim Ahmed Awara


On Thu, Apr 17, 2014 at 2:23 PM, Karim Awara <ka...@kaust.edu.sa> wrote:

>
> If the cluster is undergoing a networking issue, HDFS shouldn't be
> working either, right? The cluster is quite free for my job, and the
> machines have high specs: 18 GB of memory each, quad core.
>
> --
> Best Regards,
> Karim Ahmed Awara
>
>
> On Thu, Apr 17, 2014 at 2:18 PM, Nitin Pawar <ni...@gmail.com> wrote:
>
>> You need to allocate sufficient memory to the datanodes as well.
>> Also make sure that none of the network cards on your datanodes have
>> gone bad.
>>
>> Most of the time, the error you saw occurs when the cluster is under
>> heavy utilization or is experiencing some kind of network issue.
>>
>>
>> On Thu, Apr 17, 2014 at 4:37 PM, Karim Awara <ka...@kaust.edu.sa> wrote:
>>
>>>
>>> I'm setting mapred.child.java.opts to -Xmx8G. The dataset I'm using for
>>> the MapReduce job is quite small (a few hundred megabytes) as well.
>>>
>>> --
>>> Best Regards,
>>> Karim Ahmed Awara
>>>
>>>
>>> On Thu, Apr 17, 2014 at 2:01 PM, Nitin Pawar <ni...@gmail.com> wrote:
>>>
>>>> Can you tell us the JVM memory allocated to the datanodes?
>>>>
>>>>
>>>> On Thu, Apr 17, 2014 at 4:28 PM, Karim Awara <ka...@kaust.edu.sa> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I am running a MapReduce job on a cluster of 16 machines. HDFS is
>>>>> working normally; however, when I run a MapReduce job, it fails with:
>>>>>
>>>>> java.io.IOException: Bad connect ack with firstBadLink
>>>>>
>>>>> although all the daemon processes are up.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Best Regards,
>>>>> Karim Ahmed Awara
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Nitin Pawar
>>>>
>>>
>>>
>>>
>>
>>
>>
>> --
>> Nitin Pawar
>>
>
>

-- 

------------------------------
This message and its contents, including attachments are intended solely 
for the original recipient. If you are not the intended recipient or have 
received this message in error, please notify me immediately and delete 
this message from your computer system. Any unauthorized use or 
distribution is prohibited. Please consider the environment before printing 
this email.

Re: Task or job tracker seems not working?

Posted by Karim Awara <ka...@kaust.edu.sa>.
hi,

I still dont think the problem with the file size or so. This is a snap of
the job tracker log just as i start hadoop. It seems it has an exception
already.

2014-04-17 16:16:31,391 INFO org.apache.hadoop.mapred.
JobTracker: Setting safe mode to false. Requested by : karim
2014-04-17 16:16:31,400 INFO org.apache.hadoop.util.NativeCodeLoader:
Loaded the native-hadoop library
2014-04-17 16:16:31,429 INFO org.apache.hadoop.mapred.JobTracker: Cleaning
up the system directory
2014-04-17 16:16:31,443 INFO org.apache.hadoop.mapred.JobHistory: Creating
DONE folder at file:/home/karim/systems/hadoop-1.1.2/logs/history/done
2014-04-17 16:16:31,445 INFO org.apache.hadoop.mapred.JobTracker: History
server being initialized in embedded mode
2014-04-17 16:16:31,447 INFO org.apache.hadoop.mapred.JobHistoryServer:
Started job history server at: localhost:50030
2014-04-17 16:16:31,448 INFO org.apache.hadoop.mapred.JobTracker: Job
History Server web address: localhost:50030
2014-04-17 16:16:31,449 INFO
org.apache.hadoop.mapred.CompletedJobStatusStore: Completed job store is
inactive
2014-04-17 16:16:31,474 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer
Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/home/karim/data/hadoop-1.1.2_data/hadoop_tmp-karim/mapred/system/
jobtracker.info could only be replicated to 0 nodes, instead of 1
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1639)
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:736)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
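Two details in the log above point at configuration rather than the network: the JobHistory DONE folder is created on a file:/ URI (the local filesystem), and the NameNode reports jobtracker.info "could only be replicated to 0 nodes", i.e. it sees no live DataNodes. A minimal, hypothetical sketch of the URI-scheme check, as plain shell string matching against the quoted log line (no Hadoop API involved):

```shell
#!/bin/sh
# The "Creating DONE folder at file:/..." line shows which filesystem
# Hadoop resolved: a "file:" scheme is the local disk, while a working
# cluster setup resolves to "hdfs://<namenode>:<port>".
log_line='INFO org.apache.hadoop.mapred.JobHistory: Creating DONE folder at file:/home/karim/systems/hadoop-1.1.2/logs/history/done'

case "$log_line" in
  *' at file:/'*)  fs_kind=local ;;   # local FS: fs.default.name likely missing or wrong
  *' at hdfs://'*) fs_kind=hdfs ;;    # distributed FS: expected on a cluster
  *)               fs_kind=unknown ;;
esac
echo "resolved filesystem: $fs_kind"
```

In Hadoop 1.x the scheme normally comes from fs.default.name in conf/core-site.xml; if that property is unset or points at file:///, the JobTracker system directory lands on local disk, which fits both symptoms above. (fs.default.name is the 1.x-era name; later releases call it fs.defaultFS.)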



--
Best Regards,
Karim Ahmed Awara


On Thu, Apr 17, 2014 at 2:23 PM, Karim Awara <ka...@kaust.edu.sa>wrote:

>
> If the cluster were undergoing some networking issue, then HDFS shouldn't
> be working either, right? The cluster is quite free for my job, and the
> machines have high specs: 18 GB of memory each, quad core.
>
> --
> Best Regards,
> Karim Ahmed Awara
>
>
> On Thu, Apr 17, 2014 at 2:18 PM, Nitin Pawar <ni...@gmail.com>wrote:
>
>> You need to allocate sufficient memory to the datanodes as well.
>> Also make sure that none of the network cards on your datanodes have
>> gone bad.
>>
>> Most of the time, the error you saw appears when there is heavy
>> utilization of the cluster or it is undergoing some kind of network issue.
>>
>>
>> On Thu, Apr 17, 2014 at 4:37 PM, Karim Awara <ka...@kaust.edu.sa>wrote:
>>
>>>
>>> I'm setting mapred.child.java.opts to -Xmx8G. The dataset I'm using for
>>> the mapreduce job is quite small (a few hundred megabytes) as well.
>>>
>>> --
>>> Best Regards,
>>> Karim Ahmed Awara
>>>
>>>
>>> On Thu, Apr 17, 2014 at 2:01 PM, Nitin Pawar <ni...@gmail.com>wrote:
>>>
>>>> Can you tell us JVM memory allocated to all data nodes?
>>>>
>>>>
>>>> On Thu, Apr 17, 2014 at 4:28 PM, Karim Awara <ka...@kaust.edu.sa>wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I am running a mapreduce job on a cluster of 16 machines.  The HDFS is
>>>>> working normally; however, when I run a mapreduce job, it gives an error:
>>>>>
>>>>> java.io.IOException: Bad connect ack with firstBadLink
>>>>>
>>>>> although I have all the processes up..
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Best Regards,
>>>>> Karim Ahmed Awara
>>>>>
>>>>> ------------------------------
>>>>> This message and its contents, including attachments are intended
>>>>> solely for the original recipient. If you are not the intended recipient or
>>>>> have received this message in error, please notify me immediately and
>>>>> delete this message from your computer system. Any unauthorized use or
>>>>> distribution is prohibited. Please consider the environment before printing
>>>>> this email.
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Nitin Pawar
>>>>
>>>
>>>
>>> ------------------------------
>>> This message and its contents, including attachments are intended solely
>>> for the original recipient. If you are not the intended recipient or have
>>> received this message in error, please notify me immediately and delete
>>> this message from your computer system. Any unauthorized use or
>>> distribution is prohibited. Please consider the environment before printing
>>> this email.
>>>
>>
>>
>>
>> --
>> Nitin Pawar
>>
>
>

-- 

------------------------------
This message and its contents, including attachments are intended solely 
for the original recipient. If you are not the intended recipient or have 
received this message in error, please notify me immediately and delete 
this message from your computer system. Any unauthorized use or 
distribution is prohibited. Please consider the environment before printing 
this email.

Re: Task or job tracker seems not working?

Posted by Karim Awara <ka...@kaust.edu.sa>.
If the cluster were undergoing some networking issue, then HDFS shouldn't be
working either, right? The cluster is quite free for my job, and the machines
have high specs: 18 GB of memory each, quad core.

--
Best Regards,
Karim Ahmed Awara


On Thu, Apr 17, 2014 at 2:18 PM, Nitin Pawar <ni...@gmail.com>wrote:

> You need to allocate sufficient memory to the datanodes as well.
> Also make sure that none of the network cards on your datanodes have
> gone bad.
>
> Most of the time, the error you saw appears when there is heavy
> utilization of the cluster or it is undergoing some kind of network issue.
>
>
> On Thu, Apr 17, 2014 at 4:37 PM, Karim Awara <ka...@kaust.edu.sa>wrote:
>
>>
>> I'm setting mapred.child.java.opts to -Xmx8G. The dataset I'm using for
>> the mapreduce job is quite small (a few hundred megabytes) as well.
>>
>> --
>> Best Regards,
>> Karim Ahmed Awara
>>
>>
>> On Thu, Apr 17, 2014 at 2:01 PM, Nitin Pawar <ni...@gmail.com>wrote:
>>
>>> Can you tell us JVM memory allocated to all data nodes?
>>>
>>>
>>> On Thu, Apr 17, 2014 at 4:28 PM, Karim Awara <ka...@kaust.edu.sa>wrote:
>>>
>>>> Hi,
>>>>
>>>> I am running a mapreduce job on a cluster of 16 machines.  The HDFS is
>>>> working normally; however, when I run a mapreduce job, it gives an error:
>>>>
>>>> java.io.IOException: Bad connect ack with firstBadLink
>>>>
>>>> although I have all the processes up..
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Best Regards,
>>>> Karim Ahmed Awara
>>>>
>>>> ------------------------------
>>>> This message and its contents, including attachments are intended
>>>> solely for the original recipient. If you are not the intended recipient or
>>>> have received this message in error, please notify me immediately and
>>>> delete this message from your computer system. Any unauthorized use or
>>>> distribution is prohibited. Please consider the environment before printing
>>>> this email.
>>>
>>>
>>>
>>>
>>> --
>>> Nitin Pawar
>>>
>>
>>
>> ------------------------------
>> This message and its contents, including attachments are intended solely
>> for the original recipient. If you are not the intended recipient or have
>> received this message in error, please notify me immediately and delete
>> this message from your computer system. Any unauthorized use or
>> distribution is prohibited. Please consider the environment before printing
>> this email.
>>
>
>
>
> --
> Nitin Pawar
>

-- 

------------------------------
This message and its contents, including attachments are intended solely 
for the original recipient. If you are not the intended recipient or have 
received this message in error, please notify me immediately and delete 
this message from your computer system. Any unauthorized use or 
distribution is prohibited. Please consider the environment before printing 
this email.

Re: Task or job tracker seems not working?

Posted by Nitin Pawar <ni...@gmail.com>.
You need to allocate sufficient memory to the datanodes as well.
Also make sure that none of the network cards on your datanodes have gone
bad.

Most of the time, the error you saw appears when there is heavy utilization
of the cluster or it is undergoing some kind of network issue.
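On the datanode-memory point: in Hadoop 1.x the daemon heap (NameNode, DataNode, JobTracker, TaskTracker) is controlled by HADOOP_HEAPSIZE in conf/hadoop-env.sh, while mapred.child.java.opts only sizes the per-task child JVMs. A sketch of the relevant fragment; the 2000 MB figure is an illustrative assumption for an 18 GB machine, not a tested recommendation:

```shell
# conf/hadoop-env.sh fragment (Hadoop 1.x).
# HADOOP_HEAPSIZE is the maximum heap, in MB, for every Hadoop daemon
# started on this node (DataNode included); the default is 1000.
export HADOOP_HEAPSIZE=2000

# mapred.child.java.opts (e.g. -Xmx8G in mapred-site.xml) applies only
# to the task JVMs that the TaskTracker forks, not to the daemons.
```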


On Thu, Apr 17, 2014 at 4:37 PM, Karim Awara <ka...@kaust.edu.sa>wrote:

>
> I'm setting mapred.child.java.opts to -Xmx8G. The dataset I'm using for
> the mapreduce job is quite small (a few hundred megabytes) as well.
>
> --
> Best Regards,
> Karim Ahmed Awara
>
>
> On Thu, Apr 17, 2014 at 2:01 PM, Nitin Pawar <ni...@gmail.com>wrote:
>
>> Can you tell us JVM memory allocated to all data nodes?
>>
>>
>> On Thu, Apr 17, 2014 at 4:28 PM, Karim Awara <ka...@kaust.edu.sa>wrote:
>>
>>> Hi,
>>>
>>> I am running a mapreduce job on a cluster of 16 machines.  The HDFS is
>>> working normally; however, when I run a mapreduce job, it gives an error:
>>>
>>> java.io.IOException: Bad connect ack with firstBadLink
>>>
>>> although I have all the processes up..
>>>
>>>
>>>
>>>
>>> --
>>> Best Regards,
>>> Karim Ahmed Awara
>>>
>>> ------------------------------
>>> This message and its contents, including attachments are intended solely
>>> for the original recipient. If you are not the intended recipient or have
>>> received this message in error, please notify me immediately and delete
>>> this message from your computer system. Any unauthorized use or
>>> distribution is prohibited. Please consider the environment before printing
>>> this email.
>>
>>
>>
>>
>> --
>> Nitin Pawar
>>
>
>
> ------------------------------
> This message and its contents, including attachments are intended solely
> for the original recipient. If you are not the intended recipient or have
> received this message in error, please notify me immediately and delete
> this message from your computer system. Any unauthorized use or
> distribution is prohibited. Please consider the environment before printing
> this email.
>



-- 
Nitin Pawar

Re: Task or job tracker seems not working?

Posted by Nitin Pawar <ni...@gmail.com>.
you need to allocate sufficient memory to datanodes as well.
Also make sure that none of the network cards on your datanodes have turned
bad.

Most of the time the error you saw comes when there is heavy utilization of
cluser or it is undergoing some kind of network issue.


On Thu, Apr 17, 2014 at 4:37 PM, Karim Awara <ka...@kaust.edu.sa>wrote:

>
> im setting  mapred.child.java.opts to Xmx8G.    my dataset im using for
> the mapreduce job is quite small (few hundred megabytes) as well.
>
> --
> Best Regards,
> Karim Ahmed Awara
>
>
> On Thu, Apr 17, 2014 at 2:01 PM, Nitin Pawar <ni...@gmail.com>wrote:
>
>> Can you tell us JVM memory allocated to all data nodes?
>>
>>
>> On Thu, Apr 17, 2014 at 4:28 PM, Karim Awara <ka...@kaust.edu.sa>wrote:
>>
>>> Hi,
>>>
>>> I am running a mpreduce job on a cluster of 16 machines.  The HDFS is
>>> working normally however, when I ran a mapreduce job, it gives an error:
>>>
>>> Java.io.IOException: Bad connect act with firstBadLink
>>>
>>> although I have all the processes up..
>>>
>>>
>>>
>>>
>>> --
>>> Best Regards,
>>> Karim Ahmed Awara
>>>
>>> ------------------------------
>>> This message and its contents, including attachments are intended solely
>>> for the original recipient. If you are not the intended recipient or have
>>> received this message in error, please notify me immediately and delete
>>> this message from your computer system. Any unauthorized use or
>>> distribution is prohibited. Please consider the environment before printing
>>> this email.
>>
>>
>>
>>
>> --
>> Nitin Pawar
>>
>
>
> ------------------------------
> This message and its contents, including attachments are intended solely
> for the original recipient. If you are not the intended recipient or have
> received this message in error, please notify me immediately and delete
> this message from your computer system. Any unauthorized use or
> distribution is prohibited. Please consider the environment before printing
> this email.
>



-- 
Nitin Pawar

Re: Task or job tracker seems not working?

Posted by Nitin Pawar <ni...@gmail.com>.
you need to allocate sufficient memory to datanodes as well.
Also make sure that none of the network cards on your datanodes have turned
bad.

Most of the time the error you saw comes when there is heavy utilization of
cluser or it is undergoing some kind of network issue.


On Thu, Apr 17, 2014 at 4:37 PM, Karim Awara <ka...@kaust.edu.sa>wrote:

>
> im setting  mapred.child.java.opts to Xmx8G.    my dataset im using for
> the mapreduce job is quite small (few hundred megabytes) as well.
>
> --
> Best Regards,
> Karim Ahmed Awara
>
>
> On Thu, Apr 17, 2014 at 2:01 PM, Nitin Pawar <ni...@gmail.com>wrote:
>
>> Can you tell us JVM memory allocated to all data nodes?
>>
>>
>> On Thu, Apr 17, 2014 at 4:28 PM, Karim Awara <ka...@kaust.edu.sa>wrote:
>>
>>> Hi,
>>>
>>> I am running a mpreduce job on a cluster of 16 machines.  The HDFS is
>>> working normally however, when I ran a mapreduce job, it gives an error:
>>>
>>> Java.io.IOException: Bad connect act with firstBadLink
>>>
>>> although I have all the processes up..
>>>
>>>
>>>
>>>
>>> --
>>> Best Regards,
>>> Karim Ahmed Awara
>>>
>>> ------------------------------
>>> This message and its contents, including attachments are intended solely
>>> for the original recipient. If you are not the intended recipient or have
>>> received this message in error, please notify me immediately and delete
>>> this message from your computer system. Any unauthorized use or
>>> distribution is prohibited. Please consider the environment before printing
>>> this email.
>>
>>
>>
>>
>> --
>> Nitin Pawar
>>
>
>
>



-- 
Nitin Pawar

Re: Task or job tracker seems not working?

Posted by Karim Awara <ka...@kaust.edu.sa>.
I'm setting mapred.child.java.opts to -Xmx8G.  The dataset I'm using for the
mapreduce job is quite small (a few hundred megabytes) as well.
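
For reference, the corresponding entry in mapred-site.xml looks like this
(the 8 GB value is just the one from this thread; note the leading '-' on
the option string, which the JVM requires):

```xml
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx8g</value>
</property>
```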

--
Best Regards,
Karim Ahmed Awara


On Thu, Apr 17, 2014 at 2:01 PM, Nitin Pawar <ni...@gmail.com>wrote:

> Can you tell us JVM memory allocated to all data nodes?
>
>
> On Thu, Apr 17, 2014 at 4:28 PM, Karim Awara <ka...@kaust.edu.sa>wrote:
>
>> Hi,
>>
>> I am running a mapreduce job on a cluster of 16 machines.  The HDFS is
>> working normally; however, when I ran a mapreduce job, it gave an error:
>>
>> java.io.IOException: Bad connect ack with firstBadLink
>>
>> although I have all the processes up.
>>
>>
>>
>>
>> --
>> Best Regards,
>> Karim Ahmed Awara
>>
>
>
>
>
> --
> Nitin Pawar
>


Re: Task or job tracker seems not working?

Posted by Nitin Pawar <ni...@gmail.com>.
Can you tell us how much JVM memory is allocated to the data nodes?
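
To be concrete about what I'm asking: the per-task heap comes from the -Xmx
flag in mapred.child.java.opts, while the daemon heaps come from
hadoop-env.sh. As a small illustration of how an -Xmx flag maps to bytes
(parse_xmx is just a hypothetical helper for this thread, not part of
Hadoop):

```python
import re

def parse_xmx(java_opts):
    """Return the -Xmx heap size in bytes from a JVM options string,
    or None if no -Xmx flag is present. Illustrative helper only."""
    m = re.search(r"-Xmx(\d+)([kKmMgG]?)", java_opts)
    if not m:
        return None
    size, unit = int(m.group(1)), m.group(2).lower()
    factor = {"": 1, "k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}[unit]
    return size * factor

print(parse_xmx("-Xmx8G"))            # 8589934592 (8 GiB)
print(parse_xmx("-server -Xmx512m"))  # 536870912
print(parse_xmx("Xmx8G"))             # None -- without the leading '-'
                                      # the flag is not a JVM option at all
```

Note the last case: a value like "Xmx8G" without the hyphen is not a valid
JVM flag, so no heap limit would actually be applied.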


On Thu, Apr 17, 2014 at 4:28 PM, Karim Awara <ka...@kaust.edu.sa>wrote:

> Hi,
>
> I am running a mapreduce job on a cluster of 16 machines.  The HDFS is
> working normally; however, when I ran a mapreduce job, it gave an error:
>
> java.io.IOException: Bad connect ack with firstBadLink
>
> although I have all the processes up.
>
>
>
>
> --
> Best Regards,
> Karim Ahmed Awara
>




-- 
Nitin Pawar
