Posted to hdfs-user@hadoop.apache.org by EdwardKing <zh...@neusoft.com> on 2014/06/23 08:36:17 UTC

Where is hdfs result ?

 I am using Hadoop 2.2.0 on Red Hat Linux, and I ran the Hadoop MapReduce examples as follows:

[yarn@localhost sbin]$ hadoop jar /opt/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi -Dmapreduce.clientfactory.class.name=org.apache.hadoop.mapred.YarnClientFactory -libjars /opt/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar 16 10000
Number of Maps  = 16
Samples per Map = 10000
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Wrote input for Map #10
Wrote input for Map #11
Wrote input for Map #12
Wrote input for Map #13
Wrote input for Map #14
Wrote input for Map #15
Starting Job
14/06/22 19:57:58 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/06/22 19:57:59 INFO input.FileInputFormat: Total input paths to process : 16
14/06/22 19:57:59 INFO mapreduce.JobSubmitter: number of splits:16
14/06/22 19:57:59 INFO Configuration.deprecation: mapred.job.classpath.files is deprecated. Instead, use mapreduce.job.classpath.files
14/06/22 19:57:59 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
14/06/22 19:57:59 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/06/22 19:57:59 INFO Configuration.deprecation: mapred.cache.files.filesizes is deprecated. Instead, use mapreduce.job.cache.files.filesizes
14/06/22 19:57:59 INFO Configuration.deprecation: mapred.cache.files is deprecated. Instead, use mapreduce.job.cache.files
14/06/22 19:57:59 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
14/06/22 19:57:59 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/06/22 19:57:59 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
14/06/22 19:57:59 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
14/06/22 19:57:59 INFO Configuration.deprecation: mapred.used.genericoptionsparser is deprecated. Instead, use mapreduce.client.genericoptionsparser.used
14/06/22 19:57:59 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
14/06/22 19:57:59 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
14/06/22 19:57:59 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
14/06/22 19:57:59 INFO Configuration.deprecation: mapreduce.inputformat.class is deprecated. Instead, use mapreduce.job.inputformat.class
14/06/22 19:57:59 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/06/22 19:57:59 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/06/22 19:57:59 INFO Configuration.deprecation: mapreduce.outputformat.class is deprecated. Instead, use mapreduce.job.outputformat.class
14/06/22 19:57:59 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/06/22 19:57:59 INFO Configuration.deprecation: mapred.cache.files.timestamps is deprecated. Instead, use mapreduce.job.cache.files.timestamps
14/06/22 19:57:59 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
14/06/22 19:57:59 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
14/06/22 19:58:00 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1403492007236_0001
14/06/22 19:58:01 INFO impl.YarnClientImpl: Submitted application application_1403492007236_0001 to ResourceManager at /0.0.0.0:8032
14/06/22 19:58:01 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1403492007236_0001/
14/06/22 19:58:01 INFO mapreduce.Job: Running job: job_1403492007236_0001
14/06/22 19:58:22 INFO mapreduce.Job: Job job_1403492007236_0001 running in uber mode : false
14/06/22 19:58:22 INFO mapreduce.Job:  map 0% reduce 0%
14/06/22 20:01:45 INFO mapreduce.Job:  map 6% reduce 0%
14/06/22 20:01:52 INFO mapreduce.Job:  map 13% reduce 0%
14/06/22 20:01:53 INFO mapreduce.Job:  map 19% reduce 0%
14/06/22 20:01:56 INFO mapreduce.Job:  map 38% reduce 0%
14/06/22 20:02:36 INFO mapreduce.Job:  map 50% reduce 15%
14/06/22 20:02:37 INFO mapreduce.Job:  map 69% reduce 15%
14/06/22 20:02:39 INFO mapreduce.Job:  map 69% reduce 23%
14/06/22 20:03:05 INFO mapreduce.Job:  map 94% reduce 23%
14/06/22 20:03:06 INFO mapreduce.Job:  map 100% reduce 31%
14/06/22 20:03:08 INFO mapreduce.Job:  map 100% reduce 100%
14/06/22 20:03:09 INFO mapreduce.Job: Job job_1403492007236_0001 completed successfully
14/06/22 20:03:09 INFO mapreduce.Job: Counters: 43
 File System Counters
  FILE: Number of bytes read=358
  FILE: Number of bytes written=1387065
  FILE: Number of read operations=0
  FILE: Number of large read operations=0
  FILE: Number of write operations=0
  HDFS: Number of bytes read=4230
  HDFS: Number of bytes written=215
  HDFS: Number of read operations=67
  HDFS: Number of large read operations=0
  HDFS: Number of write operations=3
 Job Counters 
  Launched map tasks=16
  Launched reduce tasks=1
  Data-local map tasks=16
  Total time spent by all maps in occupied slots (ms)=1587720
  Total time spent by all reduces in occupied slots (ms)=71305
 Map-Reduce Framework
  Map input records=16
  Map output records=32
  Map output bytes=288
  Map output materialized bytes=448
  Input split bytes=2342
  Combine input records=0
  Combine output records=0
  Reduce input groups=2
  Reduce shuffle bytes=448
  Reduce input records=32
  Reduce output records=0
  Spilled Records=64
  Shuffled Maps =16
  Failed Shuffles=0
  Merged Map outputs=16
  GC time elapsed (ms)=45172
  CPU time spent (ms)=107260
  Physical memory (bytes) snapshot=2563190784
  Virtual memory (bytes) snapshot=6111346688
  Total committed heap usage (bytes)=1955135488
 Shuffle Errors
  BAD_ID=0
  CONNECTION=0
  IO_ERROR=0
  WRONG_LENGTH=0
  WRONG_MAP=0
  WRONG_REDUCE=0
 File Input Format Counters 
  Bytes Read=1888
 File Output Format Counters 
  Bytes Written=97
Job Finished in 311.613 seconds
Estimated value of Pi is 3.14127500000000000000


Then I checked HDFS as follows:
[yarn@localhost sbin]$ hdfs dfs -ls 
[yarn@localhost sbin]

Why doesn't HDFS show any information? How do I find the result? Thanks

---------------------------------------------------------------------------------------------------
Confidentiality Notice: The information contained in this e-mail and any accompanying attachment(s) 
is intended only for the use of the intended recipient and may be confidential and/or privileged of 
Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of this communication is 
not the intended recipient, unauthorized use, forwarding, printing,  storing, disclosure or copying 
is strictly prohibited, and may be unlawful.If you have received this communication in error,please 
immediately notify the sender by return e-mail, and delete the original message and all copies from 
your system. Thank you. 
---------------------------------------------------------------------------------------------------

Re: Where is hdfs result ?

Posted by Harsh J <ha...@cloudera.com>.
The pi job does not leave any HDFS data. You can see its result on the
console: "Estimated value of Pi is 3.14127500000000000000".
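
For reference, the pi example estimates pi by sampling points in the unit square and counting how many fall inside the inscribed quarter circle. A minimal standalone Python sketch of the same sampling idea (illustrative only; the actual Hadoop example distributes the sampling across map tasks and uses a quasi-random sequence, so its numbers differ):

```python
import random

def estimate_pi(num_maps, samples_per_map, seed=42):
    """Estimate pi: fraction of random points in the unit square
    that land inside the quarter circle, times 4."""
    rng = random.Random(seed)
    total = num_maps * samples_per_map
    inside = 0
    for _ in range(total):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # quarter-circle area / square area = pi / 4
    return 4.0 * inside / total

print(estimate_pi(16, 10000))
```

With 16 x 10000 samples the estimate typically lands within a few thousandths of pi, which matches the precision the job printed.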

What are you really trying to test or do here? If you want data on
HDFS, you can load it via 'hadoop fs -put', generate it with
teragen, etc.
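
If the goal is simply to get some data into HDFS and then see it with '-ls', a minimal sketch (a running cluster is assumed, and the filename here is made up for illustration):

```shell
# Create a small local file (hypothetical example data)
echo "hello hdfs" > sample.txt

# Copy it into the current user's HDFS home directory
hadoop fs -put sample.txt sample.txt

# The listing of the home directory should now show the file
hdfs dfs -ls
```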

On Mon, Jun 23, 2014 at 12:06 PM, EdwardKing <zh...@neusoft.com> wrote:
>  I am using Hadoop 2.2.0 on Red Hat Linux, and I ran the Hadoop MapReduce
> examples as follows:
> [...]
> Job Finished in 311.613 seconds
> Estimated value of Pi is 3.14127500000000000000
>
> Then I checked HDFS as follows:
> [yarn@localhost sbin]$ hdfs dfs -ls
> [yarn@localhost sbin]
>
> Why doesn't HDFS show any information? How do I find the result? Thanks



-- 
Harsh J

Re: Where is hdfs result ?

Posted by Harsh J <ha...@cloudera.com>.
The pi job does not leave any HDFS data. You can see its result on the
console: "Estimated value of Pi is 3.14127500000000000000".

What are you trying to really test or do here? If you want data on
HDFS, you can load it up via 'hadoop fs -put', or generate it with
teragen, etc.?

On Mon, Jun 23, 2014 at 12:06 PM, EdwardKing <zh...@neusoft.com> wrote:
>  I use hadoop 2.2.0 under Redhat Linux,I run hadoop mapreduce examples like
> follows:
>
> [yarn@localhost sbin]$ hadoop jar
> /opt/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar
> pi
> -Dmapreduce.clientfactory.class.name=org.apache.hadoop.mapred.YarnClientFactory
> -libjars
> /opt/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar
> 16 10000
> Number of Maps  = 16
> Samples per Map = 10000
> Wrote input for Map #0
> Wrote input for Map #1
> Wrote input for Map #2
> Wrote input for Map #3
> Wrote input for Map #4
> Wrote input for Map #5
> Wrote input for Map #6
> Wrote input for Map #7
> Wrote input for Map #8
> Wrote input for Map #9
> Wrote input for Map #10
> Wrote input for Map #11
> Wrote input for Map #12
> Wrote input for Map #13
> Wrote input for Map #14
> Wrote input for Map #15
> Starting Job
> 14/06/22 19:57:58 INFO client.RMProxy: Connecting to ResourceManager at
> /0.0.0.0:8032
> 14/06/22 19:57:59 INFO input.FileInputFormat: Total input paths to process :
> 16
> 14/06/22 19:57:59 INFO mapreduce.JobSubmitter: number of splits:16
> 14/06/22 19:57:59 INFO Configuration.deprecation: mapred.job.classpath.files
> is deprecated. Instead, use mapreduce.job.classpath.files
> 14/06/22 19:57:59 INFO Configuration.deprecation: user.name is deprecated.
> Instead, use mapreduce.job.user.name
> 14/06/22 19:57:59 INFO Configuration.deprecation: mapred.jar is deprecated.
> Instead, use mapreduce.job.jar
> 14/06/22 19:57:59 INFO Configuration.deprecation:
> mapred.cache.files.filesizes is deprecated. Instead, use
> mapreduce.job.cache.files.filesizes
> 14/06/22 19:57:59 INFO Configuration.deprecation: mapred.cache.files is
> deprecated. Instead, use mapreduce.job.cache.files
> 14/06/22 19:57:59 INFO Configuration.deprecation:
> mapred.map.tasks.speculative.execution is deprecated. Instead, use
> mapreduce.map.speculative
> 14/06/22 19:57:59 INFO Configuration.deprecation: mapred.reduce.tasks is
> deprecated. Instead, use mapreduce.job.reduces
> 14/06/22 19:57:59 INFO Configuration.deprecation: mapred.output.value.class
> is deprecated. Instead, use mapreduce.job.output.value.class
> 14/06/22 19:57:59 INFO Configuration.deprecation:
> mapred.reduce.tasks.speculative.execution is deprecated. Instead, use
> mapreduce.reduce.speculative
> 14/06/22 19:57:59 INFO Configuration.deprecation:
> mapred.used.genericoptionsparser is deprecated. Instead, use
> mapreduce.client.genericoptionsparser.used
> 14/06/22 19:57:59 INFO Configuration.deprecation: mapreduce.map.class is
> deprecated. Instead, use mapreduce.job.map.class
> 14/06/22 19:57:59 INFO Configuration.deprecation: mapred.job.name is
> deprecated. Instead, use mapreduce.job.name
> 14/06/22 19:57:59 INFO Configuration.deprecation: mapreduce.reduce.class is
> deprecated. Instead, use mapreduce.job.reduce.class
> 14/06/22 19:57:59 INFO Configuration.deprecation:
> mapreduce.inputformat.class is deprecated. Instead, use
> mapreduce.job.inputformat.class
> 14/06/22 19:57:59 INFO Configuration.deprecation: mapred.input.dir is
> deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
> 14/06/22 19:57:59 INFO Configuration.deprecation: mapred.output.dir is
> deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
> 14/06/22 19:57:59 INFO Configuration.deprecation:
> mapreduce.outputformat.class is deprecated. Instead, use
> mapreduce.job.outputformat.class
> 14/06/22 19:57:59 INFO Configuration.deprecation: mapred.map.tasks is
> deprecated. Instead, use mapreduce.job.maps
> 14/06/22 19:57:59 INFO Configuration.deprecation:
> mapred.cache.files.timestamps is deprecated. Instead, use
> mapreduce.job.cache.files.timestamps
> 14/06/22 19:57:59 INFO Configuration.deprecation: mapred.output.key.class is
> deprecated. Instead, use mapreduce.job.output.key.class
> 14/06/22 19:57:59 INFO Configuration.deprecation: mapred.working.dir is
> deprecated. Instead, use mapreduce.job.working.dir
> 14/06/22 19:58:00 INFO mapreduce.JobSubmitter: Submitting tokens for job:
> job_1403492007236_0001
> 14/06/22 19:58:01 INFO impl.YarnClientImpl: Submitted application
> application_1403492007236_0001 to ResourceManager at /0.0.0.0:8032
> 14/06/22 19:58:01 INFO mapreduce.Job: The url to track the job:
> http://localhost:8088/proxy/application_1403492007236_0001/
> 14/06/22 19:58:01 INFO mapreduce.Job: Running job: job_1403492007236_0001
> 14/06/22 19:58:22 INFO mapreduce.Job: Job job_1403492007236_0001 running in
> uber mode : false
> 14/06/22 19:58:22 INFO mapreduce.Job:  map 0% reduce 0%
> 14/06/22 20:01:45 INFO mapreduce.Job:  map 6% reduce 0%
> 14/06/22 20:01:52 INFO mapreduce.Job:  map 13% reduce 0%
> 14/06/22 20:01:53 INFO mapreduce.Job:  map 19% reduce 0%
> 14/06/22 20:01:56 INFO mapreduce.Job:  map 38% reduce 0%
> 14/06/22 20:02:36 INFO mapreduce.Job:  map 50% reduce 15%
> 14/06/22 20:02:37 INFO mapreduce.Job:  map 69% reduce 15%
> 14/06/22 20:02:39 INFO mapreduce.Job:  map 69% reduce 23%
> 14/06/22 20:03:05 INFO mapreduce.Job:  map 94% reduce 23%
> 14/06/22 20:03:06 INFO mapreduce.Job:  map 100% reduce 31%
> 14/06/22 20:03:08 INFO mapreduce.Job:  map 100% reduce 100%
> 14/06/22 20:03:09 INFO mapreduce.Job: Job job_1403492007236_0001 completed
> successfully
> 14/06/22 20:03:09 INFO mapreduce.Job: Counters: 43
>  File System Counters
>   FILE: Number of bytes read=358
>   FILE: Number of bytes written=1387065
>   FILE: Number of read operations=0
>   FILE: Number of large read operations=0
>   FILE: Number of write operations=0
>   HDFS: Number of bytes read=4230
>   HDFS: Number of bytes written=215
>   HDFS: Number of read operations=67
>   HDFS: Number of large read operations=0
>   HDFS: Number of write operations=3
>  Job Counters
>   Launched map tasks=16
>   Launched reduce tasks=1
>   Data-local map tasks=16
>   Total time spent by all maps in occupied slots (ms)=1587720
>   Total time spent by all reduces in occupied slots (ms)=71305
>  Map-Reduce Framework
>   Map input records=16
>   Map output records=32
>   Map output bytes=288
>   Map output materialized bytes=448
>   Input split bytes=2342
>   Combine input records=0
>   Combine output records=0
>   Reduce input groups=2
>   Reduce shuffle bytes=448
>   Reduce input records=32
>   Reduce output records=0
>   Spilled Records=64
>   Shuffled Maps =16
>   Failed Shuffles=0
>   Merged Map outputs=16
>   GC time elapsed (ms)=45172
>   CPU time spent (ms)=107260
>   Physical memory (bytes) snapshot=2563190784
>   Virtual memory (bytes) snapshot=6111346688
>   Total committed heap usage (bytes)=1955135488
>  Shuffle Errors
>   BAD_ID=0
>   CONNECTION=0
>   IO_ERROR=0
>   WRONG_LENGTH=0
>   WRONG_MAP=0
>   WRONG_REDUCE=0
>  File Input Format Counters
>   Bytes Read=1888
>  File Output Format Counters
>   Bytes Written=97
> Job Finished in 311.613 seconds
> Estimated value of Pi is 3.14127500000000000000
>
> Then I check hdfs like follows:
> [yarn@localhost sbin]$ hdfs dfs -ls
> [yarn@localhost sbin]
>
> Why hdfs don't show any information? How to do it? Thanks
>
>
>
> ---------------------------------------------------------------------------------------------------
> Confidentiality Notice: The information contained in this e-mail and any
> accompanying attachment(s)
> is intended only for the use of the intended recipient and may be
> confidential and/or privileged of
> Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader
> of this communication is
> not the intended recipient, unauthorized use, forwarding, printing,
> storing, disclosure or copying
> is strictly prohibited, and may be unlawful.If you have received this
> communication in error,please
> immediately notify the sender by return e-mail, and delete the original
> message and all copies from
> your system. Thank you.
> ---------------------------------------------------------------------------------------------------



-- 
Harsh J

Re: Where is hdfs result ?

Posted by Harsh J <ha...@cloudera.com>.
The pi job does not leave any HDFS data. You can see its result on the
console: "Estimated value of Pi is 3.14127500000000000000".

What are you trying to really test or do here? If you want data on
HDFS, you can load it up via 'hadoop fs -put', or generate it with
teragen, etc.?

On Mon, Jun 23, 2014 at 12:06 PM, EdwardKing <zh...@neusoft.com> wrote:
>  I use hadoop 2.2.0 under Redhat Linux,I run hadoop mapreduce examples like
> follows:
>
> [yarn@localhost sbin]$ hadoop jar
> /opt/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar
> pi
> -Dmapreduce.clientfactory.class.name=org.apache.hadoop.mapred.YarnClientFactory
> -libjars
> /opt/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar
> 16 10000
> Number of Maps  = 16
> Samples per Map = 10000
> Wrote input for Map #0
> Wrote input for Map #1
> Wrote input for Map #2
> Wrote input for Map #3
> Wrote input for Map #4
> Wrote input for Map #5
> Wrote input for Map #6
> Wrote input for Map #7
> Wrote input for Map #8
> Wrote input for Map #9
> Wrote input for Map #10
> Wrote input for Map #11
> Wrote input for Map #12
> Wrote input for Map #13
> Wrote input for Map #14
> Wrote input for Map #15
> Starting Job
> 14/06/22 19:57:58 INFO client.RMProxy: Connecting to ResourceManager at
> /0.0.0.0:8032
> 14/06/22 19:57:59 INFO input.FileInputFormat: Total input paths to process :
> 16
> 14/06/22 19:57:59 INFO mapreduce.JobSubmitter: number of splits:16
> 14/06/22 19:57:59 INFO Configuration.deprecation: mapred.job.classpath.files
> is deprecated. Instead, use mapreduce.job.classpath.files
> 14/06/22 19:57:59 INFO Configuration.deprecation: user.name is deprecated.
> Instead, use mapreduce.job.user.name
> 14/06/22 19:57:59 INFO Configuration.deprecation: mapred.jar is deprecated.
> Instead, use mapreduce.job.jar
> 14/06/22 19:57:59 INFO Configuration.deprecation:
> mapred.cache.files.filesizes is deprecated. Instead, use
> mapreduce.job.cache.files.filesizes
> 14/06/22 19:57:59 INFO Configuration.deprecation: mapred.cache.files is
> deprecated. Instead, use mapreduce.job.cache.files
> 14/06/22 19:57:59 INFO Configuration.deprecation:
> mapred.map.tasks.speculative.execution is deprecated. Instead, use
> mapreduce.map.speculative
> 14/06/22 19:57:59 INFO Configuration.deprecation: mapred.reduce.tasks is
> deprecated. Instead, use mapreduce.job.reduces
> 14/06/22 19:57:59 INFO Configuration.deprecation: mapred.output.value.class
> is deprecated. Instead, use mapreduce.job.output.value.class
> 14/06/22 19:57:59 INFO Configuration.deprecation:
> mapred.reduce.tasks.speculative.execution is deprecated. Instead, use
> mapreduce.reduce.speculative
> 14/06/22 19:57:59 INFO Configuration.deprecation:
> mapred.used.genericoptionsparser is deprecated. Instead, use
> mapreduce.client.genericoptionsparser.used
> 14/06/22 19:57:59 INFO Configuration.deprecation: mapreduce.map.class is
> deprecated. Instead, use mapreduce.job.map.class
> 14/06/22 19:57:59 INFO Configuration.deprecation: mapred.job.name is
> deprecated. Instead, use mapreduce.job.name
> 14/06/22 19:57:59 INFO Configuration.deprecation: mapreduce.reduce.class is
> deprecated. Instead, use mapreduce.job.reduce.class
> 14/06/22 19:57:59 INFO Configuration.deprecation:
> mapreduce.inputformat.class is deprecated. Instead, use
> mapreduce.job.inputformat.class
> 14/06/22 19:57:59 INFO Configuration.deprecation: mapred.input.dir is
> deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
> 14/06/22 19:57:59 INFO Configuration.deprecation: mapred.output.dir is
> deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
> 14/06/22 19:57:59 INFO Configuration.deprecation:
> mapreduce.outputformat.class is deprecated. Instead, use
> mapreduce.job.outputformat.class
> 14/06/22 19:57:59 INFO Configuration.deprecation: mapred.map.tasks is
> deprecated. Instead, use mapreduce.job.maps
> 14/06/22 19:57:59 INFO Configuration.deprecation:
> mapred.cache.files.timestamps is deprecated. Instead, use
> mapreduce.job.cache.files.timestamps
> 14/06/22 19:57:59 INFO Configuration.deprecation: mapred.output.key.class is
> deprecated. Instead, use mapreduce.job.output.key.class
> 14/06/22 19:57:59 INFO Configuration.deprecation: mapred.working.dir is
> deprecated. Instead, use mapreduce.job.working.dir
> 14/06/22 19:58:00 INFO mapreduce.JobSubmitter: Submitting tokens for job:
> job_1403492007236_0001
> 14/06/22 19:58:01 INFO impl.YarnClientImpl: Submitted application
> application_1403492007236_0001 to ResourceManager at /0.0.0.0:8032
> 14/06/22 19:58:01 INFO mapreduce.Job: The url to track the job:
> http://localhost:8088/proxy/application_1403492007236_0001/
> 14/06/22 19:58:01 INFO mapreduce.Job: Running job: job_1403492007236_0001
> 14/06/22 19:58:22 INFO mapreduce.Job: Job job_1403492007236_0001 running in
> uber mode : false
> 14/06/22 19:58:22 INFO mapreduce.Job:  map 0% reduce 0%
> 14/06/22 20:01:45 INFO mapreduce.Job:  map 6% reduce 0%
> 14/06/22 20:01:52 INFO mapreduce.Job:  map 13% reduce 0%
> 14/06/22 20:01:53 INFO mapreduce.Job:  map 19% reduce 0%
> 14/06/22 20:01:56 INFO mapreduce.Job:  map 38% reduce 0%
> 14/06/22 20:02:36 INFO mapreduce.Job:  map 50% reduce 15%
> 14/06/22 20:02:37 INFO mapreduce.Job:  map 69% reduce 15%
> 14/06/22 20:02:39 INFO mapreduce.Job:  map 69% reduce 23%
> 14/06/22 20:03:05 INFO mapreduce.Job:  map 94% reduce 23%
> 14/06/22 20:03:06 INFO mapreduce.Job:  map 100% reduce 31%
> 14/06/22 20:03:08 INFO mapreduce.Job:  map 100% reduce 100%
> 14/06/22 20:03:09 INFO mapreduce.Job: Job job_1403492007236_0001 completed
> successfully
> 14/06/22 20:03:09 INFO mapreduce.Job: Counters: 43
>  File System Counters
>   FILE: Number of bytes read=358
>   FILE: Number of bytes written=1387065
>   FILE: Number of read operations=0
>   FILE: Number of large read operations=0
>   FILE: Number of write operations=0
>   HDFS: Number of bytes read=4230
>   HDFS: Number of bytes written=215
>   HDFS: Number of read operations=67
>   HDFS: Number of large read operations=0
>   HDFS: Number of write operations=3
>  Job Counters
>   Launched map tasks=16
>   Launched reduce tasks=1
>   Data-local map tasks=16
>   Total time spent by all maps in occupied slots (ms)=1587720
>   Total time spent by all reduces in occupied slots (ms)=71305
>  Map-Reduce Framework
>   Map input records=16
>   Map output records=32
>   Map output bytes=288
>   Map output materialized bytes=448
>   Input split bytes=2342
>   Combine input records=0
>   Combine output records=0
>   Reduce input groups=2
>   Reduce shuffle bytes=448
>   Reduce input records=32
>   Reduce output records=0
>   Spilled Records=64
>   Shuffled Maps =16
>   Failed Shuffles=0
>   Merged Map outputs=16
>   GC time elapsed (ms)=45172
>   CPU time spent (ms)=107260
>   Physical memory (bytes) snapshot=2563190784
>   Virtual memory (bytes) snapshot=6111346688
>   Total committed heap usage (bytes)=1955135488
>  Shuffle Errors
>   BAD_ID=0
>   CONNECTION=0
>   IO_ERROR=0
>   WRONG_LENGTH=0
>   WRONG_MAP=0
>   WRONG_REDUCE=0
>  File Input Format Counters
>   Bytes Read=1888
>  File Output Format Counters
>   Bytes Written=97
> Job Finished in 311.613 seconds
> Estimated value of Pi is 3.14127500000000000000
>
> Then I check hdfs like follows:
> [yarn@localhost sbin]$ hdfs dfs -ls
> [yarn@localhost sbin]
>
> Why hdfs don't show any information? How to do it? Thanks
>
>
>
> ---------------------------------------------------------------------------------------------------
> Confidentiality Notice: The information contained in this e-mail and any
> accompanying attachment(s)
> is intended only for the use of the intended recipient and may be
> confidential and/or privileged of
> Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader
> of this communication is
> not the intended recipient, unauthorized use, forwarding, printing,
> storing, disclosure or copying
> is strictly prohibited, and may be unlawful.If you have received this
> communication in error,please
> immediately notify the sender by return e-mail, and delete the original
> message and all copies from
> your system. Thank you.
> ---------------------------------------------------------------------------------------------------



-- 
Harsh J

Re: Where is hdfs result ?

Posted by Harsh J <ha...@cloudera.com>.
The pi job does not leave any HDFS data. You can see its result on the
console: "Estimated value of Pi is 3.14127500000000000000".
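For context, the pi example estimates π by sampling points in the unit square and counting how many fall inside the quarter circle (the Hadoop job actually uses a quasi-random Halton sequence and spreads the sampling across the mappers). A minimal standalone sketch of the same Monte Carlo idea, using only awk and no Hadoop at all:

```shell
# Monte Carlo estimate of pi: draw random points in the unit square,
# count the fraction landing inside the quarter circle of radius 1,
# and multiply by 4. More samples give a tighter estimate.
awk 'BEGIN {
  srand(42)                       # fixed seed for repeatability
  n = 100000; inside = 0
  for (i = 0; i < n; i++) {
    x = rand(); y = rand()
    if (x * x + y * y <= 1.0) inside++
  }
  printf "Estimated value of Pi is %.5f\n", 4 * inside / n
}'
```

With the job's 16 x 10000 = 160000 samples the estimate lands similarly close to 3.14159; the error shrinks roughly as 1/sqrt(n), which is why the printed value above is only accurate to a few decimal places.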

What are you really trying to test or do here? If you want data on
HDFS, you can load it via 'hadoop fs -put', or generate it with
teragen, etc.
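To make that concrete, here is the kind of session you could run. This is a sketch only: the local path, HDFS directories, and row counts are made up for illustration, and the commands assume a running HDFS/YARN cluster plus the examples jar from the 2.2.0 install referenced above.

```shell
# Put a local file into HDFS, then list it to confirm it is there
echo "hello hdfs" > /tmp/sample.txt
hadoop fs -mkdir -p /user/yarn/input
hadoop fs -put /tmp/sample.txt /user/yarn/input/
hdfs dfs -ls /user/yarn/input

# Or generate data with teragen (here: 100000 rows of 100 bytes each)
hadoop jar /opt/yarn/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar \
    teragen 100000 /user/yarn/teragen-out
hdfs dfs -ls /user/yarn/teragen-out
```

Note that a bare 'hdfs dfs -ls' lists the current user's HDFS home directory (e.g. /user/yarn), which is empty until you put something there — which is exactly why the listing in the original post came back blank.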

On Mon, Jun 23, 2014 at 12:06 PM, EdwardKing <zh...@neusoft.com> wrote:
> [full quoted message trimmed; see the original post above]



-- 
Harsh J