Posted to hdfs-user@hadoop.apache.org by YIMEN YIMGA Gael <ga...@sgcib.com> on 2014/09/15 15:18:36 UTC

Explanation according to the output of a successful execution

Hello Dear Hadoopers,

Just to let you know that I finally succeeded in fixing my issue this morning.
Now, I would like a more detailed explanation of the output of the run.
Here is the output:

===

-bash-4.1$ bin/hadoop jar WordCount.jar
Entrée dans le programme MAIN !!!
14/09/15 15:00:41 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the sam.
14/09/15 15:00:41 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/09/15 15:00:41 WARN snappy.LoadSnappy: Snappy native library not loaded
14/09/15 15:00:41 INFO mapred.FileInputFormat: Total input paths to process : 17
14/09/15 15:00:42 INFO mapred.JobClient: Running job: job_201409101141_0008
14/09/15 15:00:43 INFO mapred.JobClient:  map 0% reduce 0%
14/09/15 15:00:50 INFO mapred.JobClient:  map 11% reduce 0%
14/09/15 15:00:53 INFO mapred.JobClient:  map 17% reduce 0%
14/09/15 15:00:54 INFO mapred.JobClient:  map 23% reduce 0%
14/09/15 15:00:56 INFO mapred.JobClient:  map 35% reduce 0%
14/09/15 15:00:59 INFO mapred.JobClient:  map 47% reduce 0%
14/09/15 15:01:02 INFO mapred.JobClient:  map 58% reduce 0%
14/09/15 15:01:04 INFO mapred.JobClient:  map 64% reduce 0%
14/09/15 15:01:05 INFO mapred.JobClient:  map 70% reduce 19%
14/09/15 15:01:07 INFO mapred.JobClient:  map 82% reduce 19%
14/09/15 15:01:09 INFO mapred.JobClient:  map 88% reduce 19%
14/09/15 15:01:10 INFO mapred.JobClient:  map 94% reduce 19%
14/09/15 15:01:11 INFO mapred.JobClient:  map 100% reduce 19%
14/09/15 15:01:14 INFO mapred.JobClient:  map 100% reduce 27%
14/09/15 15:01:15 INFO mapred.JobClient:  map 100% reduce 100%
14/09/15 15:01:16 INFO mapred.JobClient: Job complete: job_201409101141_0008
14/09/15 15:01:16 INFO mapred.JobClient: Counters: 29
14/09/15 15:01:16 INFO mapred.JobClient:   Job Counters
14/09/15 15:01:16 INFO mapred.JobClient:     Launched reduce tasks=1
14/09/15 15:01:16 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=47158
14/09/15 15:01:16 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/09/15 15:01:16 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/09/15 15:01:16 INFO mapred.JobClient:     Rack-local map tasks=17
14/09/15 15:01:16 INFO mapred.JobClient:     Launched map tasks=17
14/09/15 15:01:16 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=25510
14/09/15 15:01:16 INFO mapred.JobClient:   File Input Format Counters
14/09/15 15:01:16 INFO mapred.JobClient:     Bytes Read=34293
14/09/15 15:01:16 INFO mapred.JobClient:   File Output Format Counters
14/09/15 15:01:16 INFO mapred.JobClient:     Bytes Written=16014
14/09/15 15:01:16 INFO mapred.JobClient:   FileSystemCounters
14/09/15 15:01:16 INFO mapred.JobClient:     FILE_BYTES_READ=65480
14/09/15 15:01:16 INFO mapred.JobClient:     HDFS_BYTES_READ=1715
14/09/15 15:01:16 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=1080096
14/09/15 15:01:16 INFO mapred.JobClient:   Map-Reduce Framework
14/09/15 15:01:16 INFO mapred.JobClient:     Map output materialized bytes=31283
14/09/15 15:01:16 INFO mapred.JobClient:     Map input records=956
14/09/15 15:01:16 INFO mapred.JobClient:     Reduce shuffle bytes=31283
14/09/15 15:01:16 INFO mapred.JobClient:     Spilled Records=3154
14/09/15 15:01:16 INFO mapred.JobClient:     Map output bytes=46384
14/09/15 15:01:16 INFO mapred.JobClient:     Total committed heap usage (bytes)=2796748800
14/09/15 15:01:16 INFO mapred.JobClient:     CPU time spent (ms)=7520
14/09/15 15:01:16 INFO mapred.JobClient:     Map input bytes=34293
14/09/15 15:01:16 INFO mapred.JobClient:     SPLIT_RAW_BYTES=1715
14/09/15 15:01:16 INFO mapred.JobClient:     Combine input records=3435
14/09/15 15:01:16 INFO mapred.JobClient:     Reduce input records=1577
14/09/15 15:01:16 INFO mapred.JobClient:     Reduce input groups=820
14/09/15 15:01:16 INFO mapred.JobClient:     Combine output records=1577
14/09/15 15:01:16 INFO mapred.JobClient:     Physical memory (bytes) snapshot=3333201920
14/09/15 15:01:16 INFO mapred.JobClient:     Reduce output records=820
14/09/15 15:01:16 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=11883048960
14/09/15 15:01:16 INFO mapred.JobClient:     Map output records=3435
PASSE !!!
-bash-4.1$ bin/hadoop jar WordCount.jar

=====

Thanks in advance for your help.

GYY

From: YIMEN YIMGA Gael ItecCsySat
Sent: Thursday 11 September 2014 12:13
To: 'user@hadoop.apache.org'
Subject: RE: Error when executing a WordCount Program

Hello dear all,

Regarding the issue below, I succeeded in fixing the following warning:

14/09/10 15:00:24 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
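
For reference, a minimal sketch of the kind of fix that warning points at (assuming the old org.apache.hadoop.mapred API and the WordCountDriver class from my stack trace; this is not the attached program itself):

import org.apache.hadoop.mapred.JobConf;

// Passing the driver class lets Hadoop locate the jar that contains the user classes,
// which clears the "No job jar file set" warning.
JobConf conf = new JobConf(WordCountDriver.class);
// Equivalent alternative:
// conf.setJarByClass(WordCountDriver.class);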

But the main error still persists:

14/09/10 15:00:24 ERROR security.UserGroupInformation: PriviledgedActionException as:hadoop cause:org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://latdevweb02:9000/home/hadoop/hadoop/input

I have even commented out the following block in my code, but the error remains the same.

// config.set("fs.default.name", "hdfs://latdevweb02:9000/");
            // config.set("mapred.job.tracker", "latdevweb02:9001");

Could you please advise?

Standing by...

GYY

From: YIMEN YIMGA Gael ItecCsySat
Sent: Wednesday 10 September 2014 15:10
To: user@hadoop.apache.org
Subject: Error when executing a WordCount Program

Hello Hadoopers,

Here is the error I'm facing when running the WordCount example program I wrote myself.
Kindly find attached the file containing my WordCount program.
The error is below.

===========================================================================================================================================
-bash-4.1$ bin/hadoop jar WordCount.jar
Entrée dans le programme MAIN !!!
14/09/10 15:00:24 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
14/09/10 15:00:24 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
14/09/10 15:00:24 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/09/10 15:00:24 WARN snappy.LoadSnappy: Snappy native library not loaded
14/09/10 15:00:24 INFO mapred.JobClient: Cleaning up the staging area hdfs://latdevweb02:9000/user/hadoop/.staging/job_201409101141_0001
14/09/10 15:00:24 ERROR security.UserGroupInformation: PriviledgedActionException as:hadoop cause:org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://latdevweb02:9000/home/hadoop/hadoop/input
org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://latdevweb02:9000/home/hadoop/hadoop/input
        at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:197)
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)
        at org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1081)
        at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1073)
        at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:910)
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1353)
        at fr.societegenerale.bigdata.lactool.WordCountDriver.main(WordCountDriver.java:50)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
-bash-4.1$
===========================================================================================================================================

Thanks in advance for your help.

Warm regards
GYY

RE: Explanation according to the output of a successful execution

Posted by YIMEN YIMGA Gael <ga...@sgcib.com>.
Hi Shahab,

Here is an explanation of how the error was fixed.


- First of all, the OUTPUT folder must not exist beforehand, because it is created during MAP and REDUCE processing.

- Secondly, since I set the INPUT folder with the following instruction in my Java program (FileInputFormat.setInputPaths(conf, new Path("/home/hadoop/hadoop/input"));), I should leave <value> empty in the configuration file conf/core-site.xml.

After those two changes, the following error is fixed:

==
14/09/11 12:06:10 ERROR security.UserGroupInformation: PriviledgedActionException as:hadoop cause:org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://latdevweb02:9000/home/hadoop/hadoop/input
org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://latdevweb02:9000/home/hadoop/hadoop/input
        at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:197)
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)

==
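
For illustration, here is a minimal driver sketch of that setup. It is not my attached program: Hadoop's stock TokenCountMapper and LongSumReducer stand in for my own mapper and reducer classes, and /home/hadoop/hadoop/output is only an example name for the output directory that must not pre-exist.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.LongSumReducer;
import org.apache.hadoop.mapred.lib.TokenCountMapper;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCountDriver.class);   // ships the job jar with the user classes
        conf.setJobName("wordcount");

        // Stock library classes stand in for my own mapper/reducer here.
        conf.setMapperClass(TokenCountMapper.class);
        conf.setCombinerClass(LongSumReducer.class);
        conf.setReducerClass(LongSumReducer.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(LongWritable.class);

        // The input directory must already exist in HDFS.
        FileInputFormat.setInputPaths(conf, new Path("/home/hadoop/hadoop/input"));
        // The output directory must NOT exist; the job creates it itself.
        FileOutputFormat.setOutputPath(conf, new Path("/home/hadoop/hadoop/output"));

        JobClient.runJob(conf);
    }
}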

Now I would like someone with a lot of Hadoop experience to explain the output I get after running my program successfully.

Standing by…

Thanks in advance.

GYY

From: Shahab Yunus [mailto:shahab.yunus@gmail.com]
Sent: Monday 15 September 2014 15:20
To: user@hadoop.apache.org
Subject: Re: Explanation according to the output of a successful execution

How did you fix it? And what is your question now?

Regards,
Shahab

Re: Explanation according to the output of a successful execution

Posted by Shahab Yunus <sh...@gmail.com>.
How did you fix it? And what is your question now?

Regards,
Shahab

>
> PASSE !!!
>
> -bash-4.1$ bin/hadoop jar WordCount.jar
>
>
>
> =====
>
>
>
> Thanks in advance for your help.
>
>
>
> GYY
>
>
>
> *From:* YIMEN YIMGA Gael ItecCsySat
> *Sent:* Thursday 11 September 2014 12:13
> *To:* 'user@hadoop.apache.org'
> *Subject:* RE: Error when executing a WordCount Program
>
>
>
> Hello dear all,
>
>
>
> Regarding the issue below, I succeeded in fixing the following warning:
>
>
>
> *14/09/10 15:00:24 WARN mapred.JobClient: No job jar file set.  User
> classes may not be found. See JobConf(Class) or JobConf#setJar(String).*
>
>
>
> But the main error still persists.
>
>
>
> *14/09/10 15:00:24 ERROR security.UserGroupInformation:
> PriviledgedActionException as:hadoop
> cause:org.apache.hadoop.mapred.InvalidInputException: Input path does not
> exist: hdfs://latdevweb02:9000/home/hadoop/hadoop/input*
>
>
>
> I have even commented out the following block in my code, but the error
> is still the same.
>
>
>
> // config.set("fs.default.name", "hdfs://latdevweb02:9000/");
>
>             // config.set("mapred.job.tracker", "latdevweb02:9001");
>
>
>
> Could you please advise?
>
>
>
> Standing by…
>
>
>
> GYY
>
>
>
> *From:* YIMEN YIMGA Gael ItecCsySat
> *Sent:* Wednesday 10 September 2014 15:10
> *To:* user@hadoop.apache.org
> *Subject:* Error when executing a WordCount Program
>
>
>
> Hello Hadoopers,
>
>
>
> Here is the error I’m facing when running the WordCount example program
> I wrote myself.
>
> Kindly find attached the file of my WordCount program.
>
> Below the error.
>
>
>
>
> ===========================================================================================================================================
>
> *-bash-4.1$ bin/hadoop jar WordCount.jar*
>
> *Entrée dans le programme MAIN !!!*
>
> *14/09/10 15:00:24 WARN mapred.JobClient: Use GenericOptionsParser for
> parsing the arguments. Applications should implement Tool for the same.*
>
> *14/09/10 15:00:24 WARN mapred.JobClient: No job jar file set.  User
> classes may not be found. See JobConf(Class) or JobConf#setJar(String).*
>
> *14/09/10 15:00:24 INFO util.NativeCodeLoader: Loaded the native-hadoop
> library*
>
> *14/09/10 15:00:24 WARN snappy.LoadSnappy: Snappy native library not
> loaded*
>
> *14/09/10 15:00:24 INFO mapred.JobClient: Cleaning up the staging area
> hdfs://latdevweb02:9000/user/hadoop/.staging/job_201409101141_0001*
>
> *14/09/10 15:00:24 ERROR security.UserGroupInformation:
> PriviledgedActionException as:hadoop
> cause:org.apache.hadoop.mapred.InvalidInputException: Input path does not
> exist: hdfs://latdevweb02:9000/home/hadoop/hadoop/input*
>
> *org.apache.hadoop.mapred.InvalidInputException: Input path does not
> exist: hdfs://latdevweb02:9000/home/hadoop/hadoop/input*
>
> *        at
> org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:197)*
>
> *        at
> org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:208)*
>
> *        at
> org.apache.hadoop.mapred.JobClient.writeOldSplits(JobClient.java:1081)*
>
> *        at
> org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1073)*
>
> *        at
> org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)*
>
> *        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)*
>
> *        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)*
>
> *        at java.security.AccessController.doPrivileged(Native Method)*
>
> *        at javax.security.auth.Subject.doAs(Subject.java:415)*
>
> *        at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)*
>
> *        at
> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)*
>
> *        at
> org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:910)*
>
> *        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1353)*
>
> *        at
> fr.societegenerale.bigdata.lactool.WordCountDriver.main(WordCountDriver.java:50)*
>
> *        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)*
>
> *        at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)*
>
> *        at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)*
>
> *        at java.lang.reflect.Method.invoke(Method.java:601)*
>
> *        at org.apache.hadoop.util.RunJar.main(RunJar.java:160)*
>
> *-bash-4.1$*
>
>
> ===========================================================================================================================================
>
>
>
> Thanks in advance for your help.
>
>
>
> Warm regards
>
> GYY
>
>
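
One way to read the counters quoted above, assuming the usual word-count structure with a combiner (which these numbers are consistent with): the 17 input files contributed Map input records=956 lines and Map input bytes=34293, the same figure as the File Input Format counter Bytes Read=34293. The 17 map tasks emitted Map output records=3435 individual words, all of which went through the combiner (Combine input records=3435); the combiner collapsed them locally into Combine output records=1577 partial (word, count) pairs, and exactly those 1577 records were fetched by the single reducer (Reduce shuffle bytes=31283, matching Map output materialized bytes=31283, and Reduce input records=1577).

The reducer saw Reduce input groups=820 distinct words and wrote Reduce output records=820 lines, one total per distinct word, which is what the 16014 bytes in the File Output Format counter correspond to. Spilled Records=3154 is consistent with the 1577 combined records being spilled once on the map side and once more during the reduce-side merge, and Rack-local map tasks=17 suggests that none of the 17 maps ran on a node holding its input block, only on a node in the same rack.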
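Both problems discussed in the quoted messages (the "No job jar file set" warning and the missing input path) are settled in the driver. The sketch below shows a minimal old-API (org.apache.hadoop.mapred, the API visible in the stack traces) word-count driver: passing the driver class to JobConf identifies the job jar, and the directory handed to FileInputFormat.setInputPaths must already exist in HDFS, otherwise submission fails with the InvalidInputException quoted above. The class names and paths are placeholders for illustration; this is not a reproduction of the original fr.societegenerale.bigdata.lactool.WordCountDriver, which is not shown in the thread.

==========================================
import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// Illustrative word-count driver for the Hadoop 1.x old API; names and
// paths are placeholders, not the original WordCountDriver.
public class WordCount {

    public static class TokenMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                output.collect(word, ONE);            // one (word, 1) pair per token
            }
        }
    }

    public static class SumReducer extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum)); // total count for this word
        }
    }

    public static void main(String[] args) throws Exception {
        // JobConf(Class) tells Hadoop which jar to ship to the cluster,
        // which avoids the "No job jar file set" warning from the thread.
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(TokenMapper.class);
        conf.setCombinerClass(SumReducer.class);       // local aggregation before the shuffle
        conf.setReducerClass(SumReducer.class);

        // Placeholder directories; the input one must already exist in HDFS,
        // otherwise submission fails with InvalidInputException.
        FileInputFormat.setInputPaths(conf, new Path("/home/hadoop/hadoop/input"));
        FileOutputFormat.setOutputPath(conf, new Path("/home/hadoop/hadoop/output"));

        JobClient.runJob(conf);
    }
}
==========================================

Setting the combiner to the same class as the reducer is what produces the Combine input/output counters discussed above.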
