Posted to user@hadoop.apache.org by "Kartashov, Andy" <An...@mpac.ca> on 2012/11/09 18:36:55 UTC

Error running pi program

Yinghua,

What mode are you running your Hadoop in: local, pseudo-distributed, or fully distributed?

Your hostname is not being recognised.

One of your configuration settings appears to be wrong.
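
The stack trace below fails on java.net.UnknownHostException: Yinghua, i.e. some daemon is trying to reach a machine named "Yinghua" and the OS cannot resolve that name. A quick way to check on each node (generic Linux commands, a sketch only; adjust for your distro):

```shell
# Print the name this machine reports about itself
hostname

# Check that the name actually resolves (consults /etc/hosts, then DNS)
getent hosts "$(hostname)" \
  || echo "WARNING: $(hostname) does not resolve; add it to /etc/hosts"
```

If the lookup fails, the wiki page referenced in the stack trace (http://wiki.apache.org/hadoop/UnknownHost) walks through the common fixes.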





Hi, all

Could someone help look at this problem? I am setting up a four-node cluster on EC2, and the cluster seems to be set up fine until I start testing.

I have tried password-less ssh from each node to all the other nodes and there is no problem connecting. Any advice will be greatly appreciated!

[hduser@ip-XX-XX-XXX-XXX hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-0.23.4.jar pi -Dmapreduce.clientfactory.class.name=org.apache.hadoop.mapred.YarnClientFactory -libjars share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-0.23.4.jar 16 10000
Number of Maps  = 16
Samples per Map = 10000
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Wrote input for Map #10
Wrote input for Map #11
Wrote input for Map #12
Wrote input for Map #13
Wrote input for Map #14
Wrote input for Map #15
Starting Job
12/11/09 12:02:59 INFO input.FileInputFormat: Total input paths to process : 16
12/11/09 12:02:59 INFO mapreduce.JobSubmitter: number of splits:16
12/11/09 12:02:59 WARN conf.Configuration: mapred.job.classpath.files is deprecated. Instead, use mapreduce.job.classpath.files
12/11/09 12:02:59 WARN conf.Configuration: mapred.jar is deprecated. Instead, use mapreduce.job.jar
12/11/09 12:02:59 WARN conf.Configuration: mapred.cache.files is deprecated. Instead, use mapreduce.job.cache.files
12/11/09 12:02:59 WARN conf.Configuration: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
12/11/09 12:02:59 WARN conf.Configuration: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
12/11/09 12:02:59 WARN conf.Configuration: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
12/11/09 12:02:59 WARN conf.Configuration: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
12/11/09 12:02:59 WARN conf.Configuration: mapred.used.genericoptionsparser is deprecated. Instead, use mapreduce.client.genericoptionsparser.used
12/11/09 12:02:59 WARN conf.Configuration: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
12/11/09 12:02:59 WARN conf.Configuration: mapred.job.name is deprecated. Instead, use mapreduce.job.name
12/11/09 12:02:59 WARN conf.Configuration: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
12/11/09 12:02:59 WARN conf.Configuration: mapreduce.inputformat.class is deprecated. Instead, use mapreduce.job.inputformat.class
12/11/09 12:02:59 WARN conf.Configuration: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
12/11/09 12:02:59 WARN conf.Configuration: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
12/11/09 12:02:59 WARN conf.Configuration: mapreduce.outputformat.class is deprecated. Instead, use mapreduce.job.outputformat.class
12/11/09 12:02:59 WARN conf.Configuration: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
12/11/09 12:02:59 WARN conf.Configuration: mapred.cache.files.timestamps is deprecated. Instead, use mapreduce.job.cache.files.timestamps
12/11/09 12:02:59 WARN conf.Configuration: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
12/11/09 12:02:59 WARN conf.Configuration: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
12/11/09 12:03:00 INFO mapred.ResourceMgrDelegate: Submitted application application_1352478937343_0002 to ResourceManager at master/10.12.181.233:60400
12/11/09 12:03:00 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1352478937343_0002/
12/11/09 12:03:00 INFO mapreduce.Job: Running job: job_1352478937343_0002
12/11/09 12:03:01 INFO mapreduce.Job: Job job_1352478937343_0002 running in uber mode : false
12/11/09 12:03:01 INFO mapreduce.Job:  map 0% reduce 0%
12/11/09 12:03:01 INFO mapreduce.Job: Job job_1352478937343_0002 failed with state FAILED due to: Application application_1352478937343_0002 failed 1 times due to Error launching appattempt_1352478937343_0002_000001. Got exception: java.lang.reflect.UndeclaredThrowableException
        at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagerPBClientImpl.startContainer(ContainerManagerPBClientImpl.java:111)
        at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:115)
        at org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:258)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:722)
Caused by: com.google.protobuf.ServiceException: java.net.UnknownHostException: Yinghua java.net.UnknownHostException; For more details see:  http://wiki.apache.org/hadoop/UnknownHost
        at org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:144)
        at $Proxy24.startContainer(Unknown Source)
        at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagerPBClientImpl.startContainer(ContainerManagerPBClientImpl.java:104)
        ... 5 more
Caused by: java.net.UnknownHostException: Yinghua For more details see:  http://wiki.apache.org/hadoop/UnknownHost
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:713)
        at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:236)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
        at org.apache.hadoop.ipc.Client.call(Client.java:1068)
        at org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:141)
        ... 7 more
Caused by: java.net.UnknownHostException
        ... 11 more
. Failing the application.
12/11/09 12:03:01 INFO mapreduce.Job: Counters: 0
Job Finished in 2.672 seconds
java.io.FileNotFoundException: File does not exist: hdfs://master:9000/user/hduser/QuasiMonteCarlo_TMP_3_141592654/out/reduce-out
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:738)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1685)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1709)
        at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
        at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:351)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:69)
        at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:360)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
        at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:68)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:208)


--
Regards,

Yinghua

NOTICE: This e-mail message and any attachments are confidential, subject to copyright and may be privileged. Any unauthorized use, copying or disclosure is prohibited. If you are not the intended recipient, please delete and contact the sender immediately. Please consider the environment before printing this e-mail. AVIS : le présent courriel et toute pièce jointe qui l'accompagne sont confidentiels, protégés par le droit d'auteur et peuvent être couverts par le secret professionnel. Toute utilisation, copie ou divulgation non autorisée est interdite. Si vous n'êtes pas le destinataire prévu de ce courriel, supprimez-le et contactez immédiatement l'expéditeur. Veuillez penser à l'environnement avant d'imprimer le présent courriel.

Re: Error running pi program

Posted by yinghua hu <yi...@gmail.com>.
Here are my configuration files

core-site.xml

<configuration>
        <property>
                <name>fs.default.name</name>
                <value>hdfs://master:9000</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/usr/local/hadoop/tmp</value>
        </property>
</configuration>


mapred-site.xml

<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
</configuration>

hdfs-site.xml

<configuration>
        <property>
                <name>dfs.replication</name>
                <value>2</value>
        </property>
        <property>
                <name>dfs.permissions</name>
                <value>false</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>file:/home/hduser/yarn_data/hdfs/namenode</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>file:/home/hduser/yarn_data/hdfs/datanode</value>
        </property>
</configuration>

yarn-site.xml

<?xml version="1.0"?>
<configuration>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce.shuffle</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
                <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
        <property>
                <name>yarn.nodemanager.log-aggregation-enable</name>
                <value>true</value>
        </property>
        <property>
                <name>yarn.resourcemanager.resource-tracker.address</name>
                <value>master:8050</value>
        </property>
        <property>
                <name>yarn.resourcemanager.scheduler.address</name>
                <value>master:8030</value>
        </property>
        <property>
                <name>yarn.resourcemanager.scheduler.class</name>
                <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
        </property>
        <property>
                <name>yarn.resourcemanager.address</name>
                <value>master:60400</value>
        </property>
</configuration>
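
None of these files mention the name "Yinghua" that the UnknownHostException reports, which suggests it comes from a node's own hostname or the slaves file rather than from this configuration. For reference, a minimal /etc/hosts that every node of a small cluster would typically share looks like the sketch below; the IPs and worker names are made-up placeholders, not taken from this setup:

```
# /etc/hosts (illustrative entries only)
127.0.0.1   localhost
10.0.0.1    master
10.0.0.2    worker1
10.0.0.3    worker2
10.0.0.4    worker3
```

Every hostname a daemon is configured with (master, each worker, and each machine's own hostname) should resolve from every node.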

On Fri, Nov 9, 2012 at 9:51 AM, yinghua hu <yi...@gmail.com> wrote:

> Hi, Andy
>
> Thanks for suggestions!
>
> I am running it on a four node cluster on EC2. All the services started
> fine, Namenode, Datanode, ResourceManager, NodeManager and
> JobHistoryServer. Each node can ssh to all the nodes without problem.
>
> But problem appears when trying to run any job.
>
>
>
>
> On Fri, Nov 9, 2012 at 9:36 AM, Kartashov, Andy <An...@mpac.ca> wrote:
>
>
>
> --
> Regards,
>
> Yinghua
>



-- 
Regards,

Yinghua

Re: Erro running pi programm

Posted by yinghua hu <yi...@gmail.com>.
Here are my configuration files

core-site.xml

<configuration>
        <property>
                <name>fs.default.name</name>
                <value>hdfs://master:9000</value>
        </property>
        <property>

                <name>hadoop.tmp.dir</name>

                <value>/usr/local/hadoop/tmp</value>

        </property>

</configuration>


mapred-site.xml

<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>

</configuration>

hdfs-site.xml

<configuration>
        <property>
                <name>dfs.replication</name>
                <value>2</value>
        </property>
         <property>

                <name>dfs.permissions</name>

                <value>false</value>

        </property>

        <property>

                <name>dfs.namenode.name.dir</name>

                <value>file:/home/hduser/yarn_data/hdfs/namenode</value>

        </property>

        <property>

                <name>dfs.datanode.data.dir</name>

                <value>file:/home/hduser/yarn_data/hdfs/datanode</value>

        </property>

</configuration>

yarn-site.xml

<?xml version="1.0"?>
<configuration>
 <property>
 <name>yarn.nodemanager.aux-services</name>

  <value>mapreduce.shuffle</value>
 </property>
 <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
 </property>
 <property>
        <name>yarn.nodemanager.log-aggregation-enable</name>
        <value>true</value>
 </property>
<property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8050</value>
</property>
<property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
</property>
<property>
        <name>yarn.resourcemanager.scheduler.class</name>


<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>


</property>

<property>

    <name>yarn.resourcemanager.address</name>

   <value>master:60400</value>

 </property>



</configuration>

On Fri, Nov 9, 2012 at 9:51 AM, yinghua hu <yi...@gmail.com> wrote:

> Hi, Andy
>
> Thanks for suggestions!
>
> I am running it on a four node cluster on EC2. All the services started
> fine, Namenode, Datanode, ResourceManager, NodeManager and
> JobHistoryServer. Each node can ssh to all the nodes without problem.
>
> But problem appears when trying to run any job.
>
>
>
>
> On Fri, Nov 9, 2012 at 9:36 AM, Kartashov, Andy <An...@mpac.ca>wrote:
>
>>   Yinghua,
>>
>>
>>
>> What mode are you running your hadoop in: Local/Pseud/Fully...?
>>
>>
>>
>> Your hostname is not recognised
>>
>>
>>
>> Your configuration setting seems to be wrong.
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> Hi, all
>>
>>
>>
>> Could some help looking at this problem? I am setting up a four node
>> cluster on EC2 and seems that the cluster is set up fine until I start
>> testing.
>>
>>
>>
>> I have tried password-less ssh from each node to all the nodes and there
>> is no problem connecting. Any advice will be greatly appreciated!
>>
>>
>>
>> [hduser@ip-XX-XX-XXX-XXX hadoop]$ bin/hadoop jar
>> share/hadoop/mapreduce/hadoop-mapreduce-examples-0.23.4.jar pi -
>> Dmapreduce.clientfactory.class.name=org.apache.hadoop.mapred.YarnClientFactory
>> -libjars
>> share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-0.23.4.jar 16 10000
>>
>> Number of Maps  = 16
>>
>> Samples per Map = 10000
>>
>> Wrote input for Map #0
>>
>> Wrote input for Map #1
>>
>> Wrote input for Map #2
>>
>> Wrote input for Map #3
>>
>> Wrote input for Map #4
>>
>> Wrote input for Map #5
>>
>> Wrote input for Map #6
>>
>> Wrote input for Map #7
>>
>> Wrote input for Map #8
>>
>> Wrote input for Map #9
>>
>> Wrote input for Map #10
>>
>> Wrote input for Map #11
>>
>> Wrote input for Map #12
>>
>> Wrote input for Map #13
>>
>> Wrote input for Map #14
>>
>> Wrote input for Map #15
>>
>> Starting Job
>>
>> 12/11/09 12:02:59 INFO input.FileInputFormat: Total input paths to
>> process : 16
>>
>> 12/11/09 12:02:59 INFO mapreduce.JobSubmitter: number of splits:16
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapred.job.classpath.files is
>> deprecated. Instead, use mapreduce.job.classpath.files
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapred.jar is deprecated.
>> Instead, use mapreduce.job.jar
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapred.cache.files is
>> deprecated. Instead, use mapreduce.job.cache.files
>>
>> 12/11/09 12:02:59 WARN conf.Configuration:
>> mapred.map.tasks.speculative.execution is deprecated. Instead, use
>> mapreduce.map.speculative
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapred.reduce.tasks is
>> deprecated. Instead, use mapreduce.job.reduces
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapred.output.value.class is
>> deprecated. Instead, use mapreduce.job.output.value.class
>>
>> 12/11/09 12:02:59 WARN conf.Configuration:
>> mapred.reduce.tasks.speculative.execution is deprecated. Instead, use
>> mapreduce.reduce.speculative
>>
>> 12/11/09 12:02:59 WARN conf.Configuration:
>> mapred.used.genericoptionsparser is deprecated. Instead, use
>> mapreduce.client.genericoptionsparser.used
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapreduce.map.class is
>> deprecated. Instead, use mapreduce.job.map.class
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapred.job.name is
>> deprecated. Instead, use mapreduce.job.name
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapreduce.reduce.class is
>> deprecated. Instead, use mapreduce.job.reduce.class
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapreduce.inputformat.class is
>> deprecated. Instead, use mapreduce.job.inputformat.class
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapred.input.dir is
>> deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapred.output.dir is
>> deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapreduce.outputformat.class
>> is deprecated. Instead, use mapreduce.job.outputformat.class
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapred.map.tasks is
>> deprecated. Instead, use mapreduce.job.maps
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapred.cache.files.timestamps
>> is deprecated. Instead, use mapreduce.job.cache.files.timestamps
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapred.output.key.class is
>> deprecated. Instead, use mapreduce.job.output.key.class
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapred.working.dir is
>> deprecated. Instead, use mapreduce.job.working.dir
>>
>> 12/11/09 12:03:00 INFO mapred.ResourceMgrDelegate: Submitted application
>> application_1352478937343_0002 to ResourceManager at master/
>> 10.12.181.233:60400
>>
>> 12/11/09 12:03:00 INFO mapreduce.Job: The url to track the job:
>> http://master:8088/proxy/application_1352478937343_0002/
>>
>> 12/11/09 12:03:00 INFO mapreduce.Job: Running job: job_1352478937343_0002
>>
>> 12/11/09 12:03:01 INFO mapreduce.Job: Job job_1352478937343_0002 running
>> in uber mode : false
>>
>> 12/11/09 12:03:01 INFO mapreduce.Job:  map 0% reduce 0%
>>
>> 12/11/09 12:03:01 INFO mapreduce.Job: Job job_1352478937343_0002 failed
>> with state FAILED due to: Application application_1352478937343_0002 failed
>> 1 times due to Error launching appattempt_1352478937343_0002_000001. Got
>> exception: java.lang.reflect.UndeclaredThrowableException
>>
>>         at
>> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagerPBClientImpl.startContainer(ContainerManagerPBClientImpl.java:111)
>>
>>         at
>> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:115)
>>
>>         at
>> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:258)
>>
>>         at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>>
>>         at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>>
>>         at java.lang.Thread.run(Thread.java:722)
>>
>> Caused by: com.google.protobuf.ServiceException:
>> java.net.UnknownHostException: Yinghua java.net.UnknownHostException; For
>> more details see:  http://wiki.apache.org/hadoop/UnknownHost
>>
>>         at
>> org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:144)
>>
>>         at $Proxy24.startContainer(Unknown Source)
>>
>>         at
>> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagerPBClientImpl.startContainer(ContainerManagerPBClientImpl.java:104)
>>
>>         ... 5 more
>>
>> Caused by: java.net.UnknownHostException: Yinghua For more details see:
>> http://wiki.apache.org/hadoop/UnknownHost
>>
>>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:713)
>>
>>         at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:236)
>>
>>         at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
>>
>>         at org.apache.hadoop.ipc.Client.call(Client.java:1068)
>>
>>         at
>> org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:141)
>>
>>         ... 7 more
>>
>> Caused by: java.net.UnknownHostException
>>
>>         ... 11 more
>>
>> . Failing the application.
>>
>> 12/11/09 12:03:01 INFO mapreduce.Job: Counters: 0
>>
>> Job Finished in 2.672 seconds
>>
>> java.io.FileNotFoundException: File does not exist:
>> hdfs://master:9000/user/hduser/QuasiMonteCarlo_TMP_3_141592654/out/reduce-out
>>
>>         at
>> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:738)
>>
>>         at
>> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1685)
>>
>>         at
>> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1709)
>>
>>         at
>> org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
>>
>>         at
>> org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:351)
>>
>>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:69)
>>
>>         at
>> org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:360)
>>
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>
>>         at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>
>>         at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>
>>         at java.lang.reflect.Method.invoke(Method.java:601)
>>
>>         at
>> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
>>
>>         at
>> org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)
>>
>>         at
>> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:68)
>>
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>
>>         at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>
>>         at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>
>>         at java.lang.reflect.Method.invoke(Method.java:601)
>>
>>         at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
>>
>>
>>
>>
>>
>> --
>> Regards,
>>
>> Yinghua
>>
>>
>>   NOTICE: This e-mail message and any attachments are confidential,
>> subject to copyright and may be privileged. Any unauthorized use, copying
>> or disclosure is prohibited. If you are not the intended recipient, please
>> delete and contact the sender immediately. Please consider the environment
>> before printing this e-mail. AVIS : le présent courriel et toute pièce
>> jointe qui l'accompagne sont confidentiels, protégés par le droit d'auteur
>> et peuvent être couverts par le secret professionnel. Toute utilisation,
>> copie ou divulgation non autorisée est interdite. Si vous n'êtes pas le
>> destinataire prévu de ce courriel, supprimez-le et contactez immédiatement
>> l'expéditeur. Veuillez penser à l'environnement avant d'imprimer le présent
>> courriel
>>
>
>
>
> --
> Regards,
>
> Yinghua
>



-- 
Regards,

Yinghua

Re: Error running pi program

Posted by yinghua hu <yi...@gmail.com>.
Here are my configuration files

core-site.xml

<configuration>
        <property>
                <name>fs.default.name</name>
                <value>hdfs://master:9000</value>
        </property>
        <property>

                <name>hadoop.tmp.dir</name>

                <value>/usr/local/hadoop/tmp</value>

        </property>

</configuration>
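
As a side note, on Hadoop 0.23.x the fs.default.name key above still works but is deprecated; the current key is fs.defaultFS. An equivalent property block would be:

```
<property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
</property>
```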


mapred-site.xml

<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>

</configuration>

hdfs-site.xml

<configuration>
        <property>
                <name>dfs.replication</name>
                <value>2</value>
        </property>
         <property>

                <name>dfs.permissions</name>

                <value>false</value>

        </property>

        <property>

                <name>dfs.namenode.name.dir</name>

                <value>file:/home/hduser/yarn_data/hdfs/namenode</value>

        </property>

        <property>

                <name>dfs.datanode.data.dir</name>

                <value>file:/home/hduser/yarn_data/hdfs/datanode</value>

        </property>

</configuration>

yarn-site.xml

<?xml version="1.0"?>
<configuration>
 <property>
 <name>yarn.nodemanager.aux-services</name>

  <value>mapreduce.shuffle</value>
 </property>
 <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
 </property>
 <property>
        <name>yarn.nodemanager.log-aggregation-enable</name>
        <value>true</value>
 </property>
<property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8050</value>
</property>
<property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
</property>
<property>
        <name>yarn.resourcemanager.scheduler.class</name>


<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>


</property>

<property>

    <name>yarn.resourcemanager.address</name>

   <value>master:60400</value>

 </property>



</configuration>
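
The UnknownHostException: Yinghua in the trace above usually means a hostname used by the cluster (here the name "Yinghua") does not resolve on the node launching the container, independent of the XML settings. A minimal sketch of a resolution check, using the names from this thread (substitute your own hosts):

```shell
#!/bin/sh
# check_host: report whether a hostname resolves through the system
# resolver (the same /etc/hosts-then-DNS path Java's InetAddress uses
# on Linux).
check_host() {
  if getent hosts "$1" > /dev/null 2>&1; then
    echo "$1 resolves"
  else
    echo "$1 does not resolve"
  fi
}

# Names from this thread: "master" from the configs, "Yinghua" from the trace.
check_host master
check_host Yinghua
```

If a name does not resolve, adding it to /etc/hosts on every node (or fixing DNS) is the usual remedy; see http://wiki.apache.org/hadoop/UnknownHost.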

On Fri, Nov 9, 2012 at 9:51 AM, yinghua hu <yi...@gmail.com> wrote:

> Hi, Andy
>
> Thanks for suggestions!
>
> I am running it on a four-node cluster on EC2. All the services started
> fine: NameNode, DataNode, ResourceManager, NodeManager and
> JobHistoryServer. Each node can ssh to all the other nodes without problem.
>
> But a problem appears when trying to run any job.
>
>
>
>
> On Fri, Nov 9, 2012 at 9:36 AM, Kartashov, Andy <An...@mpac.ca>wrote:
>
>>   Yinghua,
>>
>>
>>
>> What mode are you running your Hadoop in: Local / Pseudo-distributed /
>> Fully-distributed?
>>
>> Your hostname is not being recognised.
>>
>> Your configuration settings seem to be wrong.
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> Hi, all
>>
>>
>>
>> Could someone help look at this problem? I am setting up a four-node
>> cluster on EC2, and it seems that the cluster is set up fine until I start
>> testing.
>>
>>
>>
>> I have tried password-less ssh from each node to all the nodes and there
>> is no problem connecting. Any advice will be greatly appreciated!
>>
>>
>>
>> [hduser@ip-XX-XX-XXX-XXX hadoop]$ bin/hadoop jar
>> share/hadoop/mapreduce/hadoop-mapreduce-examples-0.23.4.jar pi -
>> Dmapreduce.clientfactory.class.name=org.apache.hadoop.mapred.YarnClientFactory
>> -libjars
>> share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-0.23.4.jar 16 10000
>>
>> Number of Maps  = 16
>>
>> Samples per Map = 10000
>>
>> Wrote input for Map #0
>>
>> Wrote input for Map #1
>>
>> Wrote input for Map #2
>>
>> Wrote input for Map #3
>>
>> Wrote input for Map #4
>>
>> Wrote input for Map #5
>>
>> Wrote input for Map #6
>>
>> Wrote input for Map #7
>>
>> Wrote input for Map #8
>>
>> Wrote input for Map #9
>>
>> Wrote input for Map #10
>>
>> Wrote input for Map #11
>>
>> Wrote input for Map #12
>>
>> Wrote input for Map #13
>>
>> Wrote input for Map #14
>>
>> Wrote input for Map #15
>>
>> Starting Job
>>
>> 12/11/09 12:02:59 INFO input.FileInputFormat: Total input paths to
>> process : 16
>>
>> 12/11/09 12:02:59 INFO mapreduce.JobSubmitter: number of splits:16
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapred.job.classpath.files is
>> deprecated. Instead, use mapreduce.job.classpath.files
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapred.jar is deprecated.
>> Instead, use mapreduce.job.jar
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapred.cache.files is
>> deprecated. Instead, use mapreduce.job.cache.files
>>
>> 12/11/09 12:02:59 WARN conf.Configuration:
>> mapred.map.tasks.speculative.execution is deprecated. Instead, use
>> mapreduce.map.speculative
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapred.reduce.tasks is
>> deprecated. Instead, use mapreduce.job.reduces
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapred.output.value.class is
>> deprecated. Instead, use mapreduce.job.output.value.class
>>
>> 12/11/09 12:02:59 WARN conf.Configuration:
>> mapred.reduce.tasks.speculative.execution is deprecated. Instead, use
>> mapreduce.reduce.speculative
>>
>> 12/11/09 12:02:59 WARN conf.Configuration:
>> mapred.used.genericoptionsparser is deprecated. Instead, use
>> mapreduce.client.genericoptionsparser.used
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapreduce.map.class is
>> deprecated. Instead, use mapreduce.job.map.class
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapred.job.name is
>> deprecated. Instead, use mapreduce.job.name
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapreduce.reduce.class is
>> deprecated. Instead, use mapreduce.job.reduce.class
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapreduce.inputformat.class is
>> deprecated. Instead, use mapreduce.job.inputformat.class
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapred.input.dir is
>> deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapred.output.dir is
>> deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapreduce.outputformat.class
>> is deprecated. Instead, use mapreduce.job.outputformat.class
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapred.map.tasks is
>> deprecated. Instead, use mapreduce.job.maps
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapred.cache.files.timestamps
>> is deprecated. Instead, use mapreduce.job.cache.files.timestamps
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapred.output.key.class is
>> deprecated. Instead, use mapreduce.job.output.key.class
>>
>> 12/11/09 12:02:59 WARN conf.Configuration: mapred.working.dir is
>> deprecated. Instead, use mapreduce.job.working.dir
>>
>> 12/11/09 12:03:00 INFO mapred.ResourceMgrDelegate: Submitted application
>> application_1352478937343_0002 to ResourceManager at master/
>> 10.12.181.233:60400
>>
>> 12/11/09 12:03:00 INFO mapreduce.Job: The url to track the job:
>> http://master:8088/proxy/application_1352478937343_0002/
>>
>> 12/11/09 12:03:00 INFO mapreduce.Job: Running job: job_1352478937343_0002
>>
>> 12/11/09 12:03:01 INFO mapreduce.Job: Job job_1352478937343_0002 running
>> in uber mode : false
>>
>> 12/11/09 12:03:01 INFO mapreduce.Job:  map 0% reduce 0%
>>
>> 12/11/09 12:03:01 INFO mapreduce.Job: Job job_1352478937343_0002 failed
>> with state FAILED due to: Application application_1352478937343_0002 failed
>> 1 times due to Error launching appattempt_1352478937343_0002_000001. Got
>> exception: java.lang.reflect.UndeclaredThrowableException
>>
>>         at
>> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagerPBClientImpl.startContainer(ContainerManagerPBClientImpl.java:111)
>>
>>         at
>> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:115)
>>
>>         at
>> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:258)
>>
>>         at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>>
>>         at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>>
>>         at java.lang.Thread.run(Thread.java:722)
>>
>> Caused by: com.google.protobuf.ServiceException:
>> java.net.UnknownHostException: Yinghua java.net.UnknownHostException; For
>> more details see:  http://wiki.apache.org/hadoop/UnknownHost
>>
>>         at
>> org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:144)
>>
>>         at $Proxy24.startContainer(Unknown Source)
>>
>>         at
>> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagerPBClientImpl.startContainer(ContainerManagerPBClientImpl.java:104)
>>
>>         ... 5 more
>>
>> Caused by: java.net.UnknownHostException: Yinghua For more details see:
>> http://wiki.apache.org/hadoop/UnknownHost
>>
>>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:713)
>>
>>         at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:236)
>>
>>         at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
>>
>>         at org.apache.hadoop.ipc.Client.call(Client.java:1068)
>>
>>         at
>> org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:141)
>>
>>         ... 7 more
>>
>> Caused by: java.net.UnknownHostException
>>
>>         ... 11 more
>>
>> . Failing the application.
>>
>> 12/11/09 12:03:01 INFO mapreduce.Job: Counters: 0
>>
>> Job Finished in 2.672 seconds
>>
>> java.io.FileNotFoundException: File does not exist:
>> hdfs://master:9000/user/hduser/QuasiMonteCarlo_TMP_3_141592654/out/reduce-out
>>
>>         at
>> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:738)
>>
>>         at
>> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1685)
>>
>>         at
>> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1709)
>>
>>         at
>> org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
>>
>>         at
>> org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:351)
>>
>>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:69)
>>
>>         at
>> org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:360)
>>
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>
>>         at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>
>>         at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>
>>         at java.lang.reflect.Method.invoke(Method.java:601)
>>
>>         at
>> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
>>
>>         at
>> org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)
>>
>>         at
>> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:68)
>>
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>
>>         at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>
>>         at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>
>>         at java.lang.reflect.Method.invoke(Method.java:601)
>>
>>         at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
>>
>>
>>
>>
>>
>> --
>> Regards,
>>
>> Yinghua
>>
>>
>>
>
>
>
> --
> Regards,
>
> Yinghua
>



-- 
Regards,

Yinghua

Re: Error running pi program

Posted by yinghua hu <yi...@gmail.com>.
Hi, Andy

Thanks for suggestions!

I am running it on a four-node cluster on EC2. All the services started
fine: NameNode, DataNode, ResourceManager, NodeManager and
JobHistoryServer. Each node can ssh to all the other nodes without problem.

But a problem appears when trying to run any job.
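
The UnknownHostException: Yinghua in the trace quoted below is commonly fixed by listing every node's hostname in /etc/hosts on all nodes. A hypothetical layout for a four-node cluster follows; only master's address 10.12.181.233 appears in the job log, so the slave names, their addresses, and the elided Yinghua address are placeholders:

```
# /etc/hosts on every node (illustrative entries; 10.12.181.233 is the
# master address that appears in the job log, everything else is assumed)
127.0.0.1      localhost
10.12.181.233  master
10.12.181.234  slave1      # placeholder
10.12.181.235  slave2      # placeholder
10.12.181.236  slave3      # placeholder
10.12.181.XXX  Yinghua     # the launching node's own hostname, if that is its name
```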




On Fri, Nov 9, 2012 at 9:36 AM, Kartashov, Andy <An...@mpac.ca>wrote:

>   Yinghua,
>
>
>
> What mode are you running your Hadoop in: Local / Pseudo-distributed /
> Fully-distributed?
>
> Your hostname is not being recognised.
>
> Your configuration settings seem to be wrong.
>
>
>
>
>
>
>
>
>
>
>
> Hi, all
>
>
>
> Could someone help look at this problem? I am setting up a four-node
> cluster on EC2, and it seems that the cluster is set up fine until I start
> testing.
>
>
>
> I have tried password-less ssh from each node to all the nodes and there
> is no problem connecting. Any advice will be greatly appreciated!
>
>
>
> [hduser@ip-XX-XX-XXX-XXX hadoop]$ bin/hadoop jar
> share/hadoop/mapreduce/hadoop-mapreduce-examples-0.23.4.jar pi -
> Dmapreduce.clientfactory.class.name=org.apache.hadoop.mapred.YarnClientFactory
> -libjars
> share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-0.23.4.jar 16 10000
>
> Number of Maps  = 16
>
> Samples per Map = 10000
>
> Wrote input for Map #0
>
> Wrote input for Map #1
>
> Wrote input for Map #2
>
> Wrote input for Map #3
>
> Wrote input for Map #4
>
> Wrote input for Map #5
>
> Wrote input for Map #6
>
> Wrote input for Map #7
>
> Wrote input for Map #8
>
> Wrote input for Map #9
>
> Wrote input for Map #10
>
> Wrote input for Map #11
>
> Wrote input for Map #12
>
> Wrote input for Map #13
>
> Wrote input for Map #14
>
> Wrote input for Map #15
>
> Starting Job
>
> 12/11/09 12:02:59 INFO input.FileInputFormat: Total input paths to process
> : 16
>
> 12/11/09 12:02:59 INFO mapreduce.JobSubmitter: number of splits:16
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapred.job.classpath.files is
> deprecated. Instead, use mapreduce.job.classpath.files
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapred.jar is deprecated.
> Instead, use mapreduce.job.jar
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapred.cache.files is
> deprecated. Instead, use mapreduce.job.cache.files
>
> 12/11/09 12:02:59 WARN conf.Configuration:
> mapred.map.tasks.speculative.execution is deprecated. Instead, use
> mapreduce.map.speculative
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapred.reduce.tasks is
> deprecated. Instead, use mapreduce.job.reduces
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapred.output.value.class is
> deprecated. Instead, use mapreduce.job.output.value.class
>
> 12/11/09 12:02:59 WARN conf.Configuration:
> mapred.reduce.tasks.speculative.execution is deprecated. Instead, use
> mapreduce.reduce.speculative
>
> 12/11/09 12:02:59 WARN conf.Configuration:
> mapred.used.genericoptionsparser is deprecated. Instead, use
> mapreduce.client.genericoptionsparser.used
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapreduce.map.class is
> deprecated. Instead, use mapreduce.job.map.class
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapred.job.name is deprecated.
> Instead, use mapreduce.job.name
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapreduce.reduce.class is
> deprecated. Instead, use mapreduce.job.reduce.class
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapreduce.inputformat.class is
> deprecated. Instead, use mapreduce.job.inputformat.class
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapred.input.dir is deprecated.
> Instead, use mapreduce.input.fileinputformat.inputdir
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapred.output.dir is
> deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapreduce.outputformat.class is
> deprecated. Instead, use mapreduce.job.outputformat.class
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapred.map.tasks is deprecated.
> Instead, use mapreduce.job.maps
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapred.cache.files.timestamps
> is deprecated. Instead, use mapreduce.job.cache.files.timestamps
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapred.output.key.class is
> deprecated. Instead, use mapreduce.job.output.key.class
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapred.working.dir is
> deprecated. Instead, use mapreduce.job.working.dir
>
> 12/11/09 12:03:00 INFO mapred.ResourceMgrDelegate: Submitted application
> application_1352478937343_0002 to ResourceManager at master/
> 10.12.181.233:60400
>
> 12/11/09 12:03:00 INFO mapreduce.Job: The url to track the job:
> http://master:8088/proxy/application_1352478937343_0002/
>
> 12/11/09 12:03:00 INFO mapreduce.Job: Running job: job_1352478937343_0002
>
> 12/11/09 12:03:01 INFO mapreduce.Job: Job job_1352478937343_0002 running
> in uber mode : false
>
> 12/11/09 12:03:01 INFO mapreduce.Job:  map 0% reduce 0%
>
> 12/11/09 12:03:01 INFO mapreduce.Job: Job job_1352478937343_0002 failed
> with state FAILED due to: Application application_1352478937343_0002 failed
> 1 times due to Error launching appattempt_1352478937343_0002_000001. Got
> exception: java.lang.reflect.UndeclaredThrowableException
>
>         at
> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagerPBClientImpl.startContainer(ContainerManagerPBClientImpl.java:111)
>
>         at
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:115)
>
>         at
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:258)
>
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>
>         at java.lang.Thread.run(Thread.java:722)
>
> Caused by: com.google.protobuf.ServiceException:
> java.net.UnknownHostException: Yinghua java.net.UnknownHostException; For
> more details see:  http://wiki.apache.org/hadoop/UnknownHost
>
>         at
> org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:144)
>
>         at $Proxy24.startContainer(Unknown Source)
>
>         at
> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagerPBClientImpl.startContainer(ContainerManagerPBClientImpl.java:104)
>
>         ... 5 more
>
> Caused by: java.net.UnknownHostException: Yinghua For more details see:
> http://wiki.apache.org/hadoop/UnknownHost
>
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:713)
>
>         at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:236)
>
>         at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:1068)
>
>         at
> org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:141)
>
>         ... 7 more
>
> Caused by: java.net.UnknownHostException
>
>         ... 11 more
>
> . Failing the application.
>
> 12/11/09 12:03:01 INFO mapreduce.Job: Counters: 0
>
> Job Finished in 2.672 seconds
>
> java.io.FileNotFoundException: File does not exist:
> hdfs://master:9000/user/hduser/QuasiMonteCarlo_TMP_3_141592654/out/reduce-out
>
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:738)
>
>         at
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1685)
>
>         at
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1709)
>
>         at
> org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
>
>         at
> org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:351)
>
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:69)
>
>         at
> org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:360)
>
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
>         at java.lang.reflect.Method.invoke(Method.java:601)
>
>         at
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
>
>         at
> org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)
>
>         at
> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:68)
>
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
>         at java.lang.reflect.Method.invoke(Method.java:601)
>
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
>
>
>
>
>
> --
> Regards,
>
> Yinghua
>
>
>   NOTICE: This e-mail message and any attachments are confidential,
> subject to copyright and may be privileged. Any unauthorized use, copying
> or disclosure is prohibited. If you are not the intended recipient, please
> delete and contact the sender immediately. Please consider the environment
> before printing this e-mail. AVIS : le présent courriel et toute pièce
> jointe qui l'accompagne sont confidentiels, protégés par le droit d'auteur
> et peuvent être couverts par le secret professionnel. Toute utilisation,
> copie ou divulgation non autorisée est interdite. Si vous n'êtes pas le
> destinataire prévu de ce courriel, supprimez-le et contactez immédiatement
> l'expéditeur. Veuillez penser à l'environnement avant d'imprimer le présent
> courriel
>



-- 
Regards,

Yinghua

Re: Error running pi program

Posted by yinghua hu <yi...@gmail.com>.
Hi, Andy

Thanks for the suggestions!

I am running it on a four-node cluster on EC2. All the services started
fine: NameNode, DataNode, ResourceManager, NodeManager and
JobHistoryServer. Each node can ssh to every other node without a problem.

But the problem appears whenever I try to run a job.
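Since the stack trace quoted below bottoms out in java.net.UnknownHostException: Yinghua, one quick sanity check is whether every node can resolve its own hostname and the names of its peers. This is a generic Linux sketch, not a command from this thread; "master" is used as an example peer name:

```shell
# Hostname-resolution sanity check for the UnknownHostException.
# Run on every node; "master" is an example name, substitute your own hosts.
hostname                      # the name this node reports for itself
hostname -f                   # the fully qualified name, if one is configured
getent hosts "$(hostname)"    # does /etc/hosts or DNS resolve our own name?
getent hosts master           # can we resolve the ResourceManager's host?
```

If `getent hosts` prints nothing for a name that Hadoop uses (such as "Yinghua" in the trace), that name needs an entry in /etc/hosts on every node, or a DNS fix.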




On Fri, Nov 9, 2012 at 9:36 AM, Kartashov, Andy <An...@mpac.ca> wrote:

>   Yinghua,
>
>
>
> What mode are you running your Hadoop in: local, pseudo-distributed, or fully distributed?
>
>
>
> Your hostname is not recognised.
>
>
>
> Your configuration settings seem to be wrong.
>
>
>
>
>
>
>
>
>
>
>
> Hi, all
>
>
>
> Could someone help look at this problem? I am setting up a four-node
> cluster on EC2, and it seems that the cluster is set up fine until I start
> testing.
>
>
>
> I have tried password-less ssh from each node to all the nodes and there
> is no problem connecting. Any advice would be greatly appreciated!
>
>
>
> [hduser@ip-XX-XX-XXX-XXX hadoop]$ bin/hadoop jar
> share/hadoop/mapreduce/hadoop-mapreduce-examples-0.23.4.jar pi
> -Dmapreduce.clientfactory.class.name=org.apache.hadoop.mapred.YarnClientFactory
> -libjars
> share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-0.23.4.jar 16 10000
>
> Number of Maps  = 16
>
> Samples per Map = 10000
>
> Wrote input for Map #0
>
> Wrote input for Map #1
>
> Wrote input for Map #2
>
> Wrote input for Map #3
>
> Wrote input for Map #4
>
> Wrote input for Map #5
>
> Wrote input for Map #6
>
> Wrote input for Map #7
>
> Wrote input for Map #8
>
> Wrote input for Map #9
>
> Wrote input for Map #10
>
> Wrote input for Map #11
>
> Wrote input for Map #12
>
> Wrote input for Map #13
>
> Wrote input for Map #14
>
> Wrote input for Map #15
>
> Starting Job
>
> 12/11/09 12:02:59 INFO input.FileInputFormat: Total input paths to process
> : 16
>
> 12/11/09 12:02:59 INFO mapreduce.JobSubmitter: number of splits:16
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapred.job.classpath.files is
> deprecated. Instead, use mapreduce.job.classpath.files
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapred.jar is deprecated.
> Instead, use mapreduce.job.jar
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapred.cache.files is
> deprecated. Instead, use mapreduce.job.cache.files
>
> 12/11/09 12:02:59 WARN conf.Configuration:
> mapred.map.tasks.speculative.execution is deprecated. Instead, use
> mapreduce.map.speculative
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapred.reduce.tasks is
> deprecated. Instead, use mapreduce.job.reduces
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapred.output.value.class is
> deprecated. Instead, use mapreduce.job.output.value.class
>
> 12/11/09 12:02:59 WARN conf.Configuration:
> mapred.reduce.tasks.speculative.execution is deprecated. Instead, use
> mapreduce.reduce.speculative
>
> 12/11/09 12:02:59 WARN conf.Configuration:
> mapred.used.genericoptionsparser is deprecated. Instead, use
> mapreduce.client.genericoptionsparser.used
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapreduce.map.class is
> deprecated. Instead, use mapreduce.job.map.class
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapred.job.name is deprecated.
> Instead, use mapreduce.job.name
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapreduce.reduce.class is
> deprecated. Instead, use mapreduce.job.reduce.class
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapreduce.inputformat.class is
> deprecated. Instead, use mapreduce.job.inputformat.class
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapred.input.dir is deprecated.
> Instead, use mapreduce.input.fileinputformat.inputdir
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapred.output.dir is
> deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapreduce.outputformat.class is
> deprecated. Instead, use mapreduce.job.outputformat.class
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapred.map.tasks is deprecated.
> Instead, use mapreduce.job.maps
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapred.cache.files.timestamps
> is deprecated. Instead, use mapreduce.job.cache.files.timestamps
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapred.output.key.class is
> deprecated. Instead, use mapreduce.job.output.key.class
>
> 12/11/09 12:02:59 WARN conf.Configuration: mapred.working.dir is
> deprecated. Instead, use mapreduce.job.working.dir
>
> 12/11/09 12:03:00 INFO mapred.ResourceMgrDelegate: Submitted application
> application_1352478937343_0002 to ResourceManager at master/
> 10.12.181.233:60400
>
> 12/11/09 12:03:00 INFO mapreduce.Job: The url to track the job:
> http://master:8088/proxy/application_1352478937343_0002/
>
> 12/11/09 12:03:00 INFO mapreduce.Job: Running job: job_1352478937343_0002
>
> 12/11/09 12:03:01 INFO mapreduce.Job: Job job_1352478937343_0002 running
> in uber mode : false
>
> 12/11/09 12:03:01 INFO mapreduce.Job:  map 0% reduce 0%
>
> 12/11/09 12:03:01 INFO mapreduce.Job: Job job_1352478937343_0002 failed
> with state FAILED due to: Application application_1352478937343_0002 failed
> 1 times due to Error launching appattempt_1352478937343_0002_000001. Got
> exception: java.lang.reflect.UndeclaredThrowableException
>
>         at
> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagerPBClientImpl.startContainer(ContainerManagerPBClientImpl.java:111)
>
>         at
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:115)
>
>         at
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:258)
>
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>
>         at java.lang.Thread.run(Thread.java:722)
>
> Caused by: com.google.protobuf.ServiceException:
> java.net.UnknownHostException: Yinghua java.net.UnknownHostException; For
> more details see:  http://wiki.apache.org/hadoop/UnknownHost
>
>         at
> org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:144)
>
>         at $Proxy24.startContainer(Unknown Source)
>
>         at
> org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagerPBClientImpl.startContainer(ContainerManagerPBClientImpl.java:104)
>
>         ... 5 more
>
> Caused by: java.net.UnknownHostException: Yinghua For more details see:
> http://wiki.apache.org/hadoop/UnknownHost
>
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:713)
>
>         at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:236)
>
>         at org.apache.hadoop.ipc.Client.getConnection(Client.java:1188)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:1068)
>
>         at
> org.apache.hadoop.yarn.ipc.ProtoOverHadoopRpcEngine$Invoker.invoke(ProtoOverHadoopRpcEngine.java:141)
>
>         ... 7 more
>
> Caused by: java.net.UnknownHostException
>
>         ... 11 more
>
> . Failing the application.
>
> 12/11/09 12:03:01 INFO mapreduce.Job: Counters: 0
>
> Job Finished in 2.672 seconds
>
> java.io.FileNotFoundException: File does not exist:
> hdfs://master:9000/user/hduser/QuasiMonteCarlo_TMP_3_141592654/out/reduce-out
>
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:738)
>
>         at
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1685)
>
>         at
> org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1709)
>
>         at
> org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
>
>         at
> org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:351)
>
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:69)
>
>         at
> org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:360)
>
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
>         at java.lang.reflect.Method.invoke(Method.java:601)
>
>         at
> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
>
>         at
> org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)
>
>         at
> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:68)
>
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
>         at java.lang.reflect.Method.invoke(Method.java:601)
>
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
>
>
>
>
>
> --
> Regards,
>
> Yinghua
>
>
>   NOTICE: This e-mail message and any attachments are confidential,
> subject to copyright and may be privileged. Any unauthorized use, copying
> or disclosure is prohibited. If you are not the intended recipient, please
> delete and contact the sender immediately. Please consider the environment
> before printing this e-mail. AVIS : le présent courriel et toute pièce
> jointe qui l'accompagne sont confidentiels, protégés par le droit d'auteur
> et peuvent être couverts par le secret professionnel. Toute utilisation,
> copie ou divulgation non autorisée est interdite. Si vous n'êtes pas le
> destinataire prévu de ce courriel, supprimez-le et contactez immédiatement
> l'expéditeur. Veuillez penser à l'environnement avant d'imprimer le présent
> courriel
>



-- 
Regards,

Yinghua
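For readers hitting the same java.net.UnknownHostException: the usual cure is to make every cluster hostname resolvable on every node. A minimal /etc/hosts sketch for a four-node cluster follows; the node names and private addresses are placeholders, not values from this thread:

```
# /etc/hosts -- kept identical on all four nodes (example addresses only)
127.0.0.1   localhost
10.0.0.10   master
10.0.0.11   slave1
10.0.0.12   slave2
10.0.0.13   slave3
```

The wiki page referenced in the trace, http://wiki.apache.org/hadoop/UnknownHost, walks through the same diagnosis in more detail.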
