Posted to user@hadoop.apache.org by Li Li <fa...@gmail.com> on 2014/02/12 07:11:09 UTC

Compression codec com.hadoop.compression.lzo.LzoCodec not found

I am running the wordcount example but encountered an exception.
I googled and learned that LZO compression's license is incompatible with
Apache's, so it's not built in.
My question is: I am using the default configuration of Hadoop 1.2.1, so
why does it need LZO?
Another question: what does "Cleaning up the staging area" mean?


./bin/hadoop jar hadoop-examples-1.2.1.jar wordcount /lili/data.txt /lili/test

14/02/12 14:06:10 INFO input.FileInputFormat: Total input paths to process : 1
14/02/12 14:06:10 INFO mapred.JobClient: Cleaning up the staging area
hdfs://172.19.34.24:8020/home/hadoop/dfsdir/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201401080916_0216
java.lang.IllegalArgumentException: Compression codec
com.hadoop.compression.lzo.LzoCodec not found.
        at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:116)
        at org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:156)
        at org.apache.hadoop.mapreduce.lib.input.TextInputFormat.isSplitable(TextInputFormat.java:47)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:258)
        at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1054)
        at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1071)
        at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
        at org.apache.hadoop.examples.WordCount.main(WordCount.java:82)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
        at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
Caused by: java.lang.ClassNotFoundException: com.hadoop.compression.lzo.LzoCodec
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:264)
        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:810)
        at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:109)
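The failure path in the stack trace above can be sketched in a few lines. This is not Hadoop's actual code, just a loose imitation of what CompressionCodecFactory.getCodecClasses does with the io.compression.codecs value: each comma-separated class name must be loadable, and the first one that isn't aborts job submission with an IllegalArgumentException wrapping the ClassNotFoundException.

```java
import java.util.ArrayList;
import java.util.List;

public class CodecCheck {
    // Loosely mimics CompressionCodecFactory.getCodecClasses: resolve every
    // class name in the comma-separated list, failing fast on the first
    // name that is not on the classpath.
    static List<Class<?>> getCodecClasses(String codecs) {
        List<Class<?>> result = new ArrayList<>();
        for (String name : codecs.split(",")) {
            try {
                result.add(Class.forName(name.trim()));
            } catch (ClassNotFoundException e) {
                throw new IllegalArgumentException(
                        "Compression codec " + name.trim() + " not found.", e);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // A class that exists on the classpath resolves fine.
        System.out.println(getCodecClasses("java.lang.String").size()); // prints 1
        // A codec class that was configured but never installed fails
        // exactly like the job submission above.
        try {
            getCodecClasses("com.hadoop.compression.lzo.LzoCodec");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
            // prints: Compression codec com.hadoop.compression.lzo.LzoCodec not found.
        }
    }
}
```

This is why the "default configuration" assumption below turns out to be wrong: the codec list in effect is not the stock default, it has been extended with LZO entries.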

Re: Compression codec com.hadoop.compression.lzo.LzoCodec not found

Posted by Li Li <fa...@gmail.com>.
thanks. it's correct now.

On Thu, Feb 13, 2014 at 9:37 AM, Ted Yu <yu...@gmail.com> wrote:
> Please remove LzoCodec from config.
>
> Cheers
>
> On Feb 12, 2014, at 5:12 PM, Li Li <fa...@gmail.com> wrote:
>
>> <property>
>>  <name>io.compression.codecs</name>
>>  <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec</value>
>>  <description>A list of the compression codec classes that can be used
>>               for compression/decompression.</description>
>> </property>
>>
>> <property>
>> <name>io.compression.codec.lzo.class</name>
>> <value>com.hadoop.compression.lzo.LzoCodec</value>
>> </property>
>>
>> On Thu, Feb 13, 2014 at 2:54 AM, Ted Yu <yu...@gmail.com> wrote:
>>> What's the value for "io.compression.codecs" config parameter ?
>>>
>>> Thanks
>>>
>>>
>>> On Tue, Feb 11, 2014 at 10:11 PM, Li Li <fa...@gmail.com> wrote:
>>>>
>>>> I am running the wordcount example but encountered an exception.
>>>> I googled and learned that LZO compression's license is incompatible
>>>> with Apache's, so it's not built in.
>>>> My question is: I am using the default configuration of Hadoop 1.2.1,
>>>> so why does it need LZO?
>>>> Another question: what does "Cleaning up the staging area" mean?
>>>>
>>>>
>>>> ./bin/hadoop jar hadoop-examples-1.2.1.jar wordcount /lili/data.txt
>>>> /lili/test
>>>>
>>>> 14/02/12 14:06:10 INFO input.FileInputFormat: Total input paths to process
>>>> : 1
>>>> 14/02/12 14:06:10 INFO mapred.JobClient: Cleaning up the staging area
>>>>
>>>> hdfs://172.19.34.24:8020/home/hadoop/dfsdir/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201401080916_0216
>>>> java.lang.IllegalArgumentException: Compression codec
>>>> com.hadoop.compression.lzo.LzoCodec not found.
>>>>        at
>>>> org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:116)
>>>>        at
>>>> org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:156)
>>>>        at
>>>> org.apache.hadoop.mapreduce.lib.input.TextInputFormat.isSplitable(TextInputFormat.java:47)
>>>>        at
>>>> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:258)
>>>>        at
>>>> org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1054)
>>>>        at
>>>> org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1071)
>>>>        at
>>>> org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
>>>>        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
>>>>        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
>>>>        at java.security.AccessController.doPrivileged(Native Method)
>>>>        at javax.security.auth.Subject.doAs(Subject.java:415)
>>>>        at
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>>>>        at
>>>> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
>>>>        at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
>>>>        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
>>>>        at org.apache.hadoop.examples.WordCount.main(WordCount.java:82)
>>>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>        at
>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>        at
>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>        at java.lang.reflect.Method.invoke(Method.java:601)
>>>>        at
>>>> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>>>>        at
>>>> org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>>>>        at
>>>> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>>>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>        at
>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>        at
>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>        at java.lang.reflect.Method.invoke(Method.java:601)
>>>>        at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
>>>> Caused by: java.lang.ClassNotFoundException:
>>>> com.hadoop.compression.lzo.LzoCodec
>>>>        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>>>>        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>>>        at java.security.AccessController.doPrivileged(Native Method)
>>>>        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>>>        at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
>>>>        at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
>>>>        at java.lang.Class.forName0(Native Method)
>>>>        at java.lang.Class.forName(Class.java:264)
>>>>        at
>>>> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:810)
>>>>        at
>>>> org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:109)
>>>
>>>

Re: Compression codec com.hadoop.compression.lzo.LzoCodec not found

Posted by Ted Yu <yu...@gmail.com>.
Please remove LzoCodec from config. 

Cheers

On Feb 12, 2014, at 5:12 PM, Li Li <fa...@gmail.com> wrote:

> <property>
>  <name>io.compression.codecs</name>
>  <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec</value>
>  <description>A list of the compression codec classes that can be used
>               for compression/decompression.</description>
> </property>
> 
> <property>
> <name>io.compression.codec.lzo.class</name>
> <value>com.hadoop.compression.lzo.LzoCodec</value>
> </property>
> 
> On Thu, Feb 13, 2014 at 2:54 AM, Ted Yu <yu...@gmail.com> wrote:
>> What's the value for "io.compression.codecs" config parameter ?
>> 
>> Thanks
>> 
>> 
>> On Tue, Feb 11, 2014 at 10:11 PM, Li Li <fa...@gmail.com> wrote:
>>> 
>>> I am running the wordcount example but encountered an exception.
>>> I googled and learned that LZO compression's license is incompatible
>>> with Apache's, so it's not built in.
>>> My question is: I am using the default configuration of Hadoop 1.2.1,
>>> so why does it need LZO?
>>> Another question: what does "Cleaning up the staging area" mean?
>>> 
>>> 
>>> ./bin/hadoop jar hadoop-examples-1.2.1.jar wordcount /lili/data.txt
>>> /lili/test
>>> 
>>> 14/02/12 14:06:10 INFO input.FileInputFormat: Total input paths to process
>>> : 1
>>> 14/02/12 14:06:10 INFO mapred.JobClient: Cleaning up the staging area
>>> 
>>> hdfs://172.19.34.24:8020/home/hadoop/dfsdir/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201401080916_0216
>>> java.lang.IllegalArgumentException: Compression codec
>>> com.hadoop.compression.lzo.LzoCodec not found.
>>>        at
>>> org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:116)
>>>        at
>>> org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:156)
>>>        at
>>> org.apache.hadoop.mapreduce.lib.input.TextInputFormat.isSplitable(TextInputFormat.java:47)
>>>        at
>>> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:258)
>>>        at
>>> org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1054)
>>>        at
>>> org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1071)
>>>        at
>>> org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
>>>        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
>>>        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
>>>        at java.security.AccessController.doPrivileged(Native Method)
>>>        at javax.security.auth.Subject.doAs(Subject.java:415)
>>>        at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>>>        at
>>> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
>>>        at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
>>>        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
>>>        at org.apache.hadoop.examples.WordCount.main(WordCount.java:82)
>>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>        at
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>        at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>        at java.lang.reflect.Method.invoke(Method.java:601)
>>>        at
>>> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>>>        at
>>> org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>>>        at
>>> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>        at
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>        at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>        at java.lang.reflect.Method.invoke(Method.java:601)
>>>        at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
>>> Caused by: java.lang.ClassNotFoundException:
>>> com.hadoop.compression.lzo.LzoCodec
>>>        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>>>        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>>        at java.security.AccessController.doPrivileged(Native Method)
>>>        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>>        at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
>>>        at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
>>>        at java.lang.Class.forName0(Native Method)
>>>        at java.lang.Class.forName(Class.java:264)
>>>        at
>>> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:810)
>>>        at
>>> org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:109)
>> 
>> 
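Applying Ted's advice, the fixed core-site.xml would drop the two lzo entries from the codec list, leaving only the codecs bundled with Hadoop 1.2.1. A sketch, assuming no hadoop-lzo installation is planned:

```xml
<!-- core-site.xml: io.compression.codecs with the LZO entries removed. -->
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec</value>
</property>
<!-- The io.compression.codec.lzo.class property can be deleted as well,
     since nothing references LzoCodec any more. -->
```

Alternatively, installing the hadoop-lzo jar and its native libraries would make the original configuration work, but that is only worthwhile if LZO-compressed input is actually needed.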

>>>        at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>        at java.lang.reflect.Method.invoke(Method.java:601)
>>>        at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
>>> Caused by: java.lang.ClassNotFoundException:
>>> com.hadoop.compression.lzo.LzoCodec
>>>        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>>>        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>>        at java.security.AccessController.doPrivileged(Native Method)
>>>        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>>        at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
>>>        at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
>>>        at java.lang.Class.forName0(Native Method)
>>>        at java.lang.Class.forName(Class.java:264)
>>>        at
>>> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:810)
>>>        at
>>> org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:109)
>> 
>> 

Re: Compression codec com.hadoop.compression.lzo.LzoCodec not found

Posted by Li Li <fa...@gmail.com>.
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec</value>
  <description>A list of the compression codec classes that can be used
               for compression/decompression.</description>
</property>

<property>
  <name>io.compression.codec.lzo.class</name>
  <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>

On Thu, Feb 13, 2014 at 2:54 AM, Ted Yu <yu...@gmail.com> wrote:
> What's the value for "io.compression.codecs" config parameter ?
>
> Thanks
>
>
> On Tue, Feb 11, 2014 at 10:11 PM, Li Li <fa...@gmail.com> wrote:
>>
>> I am running the wordcount example but hit an exception.
>> I googled and learned that LZO compression's license is incompatible with
>> the Apache license, so it is not built in.
>> My question is: if I am using the default configuration of Hadoop 1.2.1,
>> why does it need LZO?
>> Another question: what does "Cleaning up the staging area" mean?
>>
>>
>> ./bin/hadoop jar hadoop-examples-1.2.1.jar wordcount /lili/data.txt
>> /lili/test
>>
>> 14/02/12 14:06:10 INFO input.FileInputFormat: Total input paths to process
>> : 1
>> 14/02/12 14:06:10 INFO mapred.JobClient: Cleaning up the staging area
>>
>> hdfs://172.19.34.24:8020/home/hadoop/dfsdir/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201401080916_0216
>> java.lang.IllegalArgumentException: Compression codec
>> com.hadoop.compression.lzo.LzoCodec not found.
>>         at
>> org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:116)
>>         at
>> org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:156)
>>         at
>> org.apache.hadoop.mapreduce.lib.input.TextInputFormat.isSplitable(TextInputFormat.java:47)
>>         at
>> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:258)
>>         at
>> org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1054)
>>         at
>> org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1071)
>>         at
>> org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
>>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
>>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at javax.security.auth.Subject.doAs(Subject.java:415)
>>         at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>>         at
>> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
>>         at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
>>         at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
>>         at org.apache.hadoop.examples.WordCount.main(WordCount.java:82)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>         at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.lang.reflect.Method.invoke(Method.java:601)
>>         at
>> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>>         at
>> org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>>         at
>> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>         at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.lang.reflect.Method.invoke(Method.java:601)
>>         at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
>> Caused by: java.lang.ClassNotFoundException:
>> com.hadoop.compression.lzo.LzoCodec
>>         at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>>         at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>         at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
>>         at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
>>         at java.lang.Class.forName0(Native Method)
>>         at java.lang.Class.forName(Class.java:264)
>>         at
>> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:810)
>>         at
>> org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:109)
>
>

Re: Compression codec com.hadoop.compression.lzo.LzoCodec not found

Posted by Zhijie Shen <zs...@hortonworks.com>.
For the codecs, you can choose among the
org.apache.hadoop.io.compress.*Codec classes. LzoCodec has been moved out
of Hadoop (see HADOOP-4874).
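
The ClassNotFoundException in the quoted trace is ordinary classpath resolution: CompressionCodecFactory loads every class named in io.compression.codecs, and any entry absent from the classpath aborts job submission. A self-contained JDK sketch of that lookup (an illustration, not Hadoop's actual source):

```java
import java.util.ArrayList;
import java.util.List;

public class CodecCheck {
    // Returns the subset of comma-separated class names that cannot be
    // loaded, mimicking the resolution step that fails in the stack trace.
    static List<String> missing(String commaSeparated) {
        List<String> absent = new ArrayList<>();
        for (String name : commaSeparated.split(",")) {
            try {
                Class.forName(name.trim());
            } catch (ClassNotFoundException e) {
                absent.add(name.trim());
            }
        }
        return absent;
    }

    public static void main(String[] args) {
        // Without a hadoop-lzo jar on the classpath, the LZO entry is the
        // one reported missing (java.lang.String stands in for a codec
        // class that does resolve).
        System.out.println(missing(
            "java.lang.String,com.hadoop.compression.lzo.LzoCodec"));
        // prints [com.hadoop.compression.lzo.LzoCodec]
    }
}
```

On the cluster in question, running this kind of check against the actual io.compression.codecs value would single out the two com.hadoop.compression.lzo entries.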

- Zhijie


On Wed, Feb 12, 2014 at 10:54 AM, Ted Yu <yu...@gmail.com> wrote:

> What's the value for "io.compression.codecs" config parameter ?
>
> Thanks
>
>
> On Tue, Feb 11, 2014 at 10:11 PM, Li Li <fa...@gmail.com> wrote:
>
>> I am running the wordcount example but hit an exception.
>> I googled and learned that LZO compression's license is incompatible with
>> the Apache license, so it is not built in.
>> My question is: if I am using the default configuration of Hadoop 1.2.1,
>> why does it need LZO?
>> Another question: what does "Cleaning up the staging area" mean?
>>
>>
>> ./bin/hadoop jar hadoop-examples-1.2.1.jar wordcount /lili/data.txt
>> /lili/test
>>
>> 14/02/12 14:06:10 INFO input.FileInputFormat: Total input paths to
>> process : 1
>> 14/02/12 14:06:10 INFO mapred.JobClient: Cleaning up the staging area
>> hdfs://
>> 172.19.34.24:8020/home/hadoop/dfsdir/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201401080916_0216
>> java.lang.IllegalArgumentException: Compression codec
>> com.hadoop.compression.lzo.LzoCodec not found.
>>         at
>> org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:116)
>>         at
>> org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:156)
>>         at
>> org.apache.hadoop.mapreduce.lib.input.TextInputFormat.isSplitable(TextInputFormat.java:47)
>>         at
>> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:258)
>>         at
>> org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1054)
>>         at
>> org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1071)
>>         at
>> org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
>>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
>>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at javax.security.auth.Subject.doAs(Subject.java:415)
>>         at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>>         at
>> org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
>>         at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
>>         at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
>>         at org.apache.hadoop.examples.WordCount.main(WordCount.java:82)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>         at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.lang.reflect.Method.invoke(Method.java:601)
>>         at
>> org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>>         at
>> org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>>         at
>> org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>         at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.lang.reflect.Method.invoke(Method.java:601)
>>         at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
>> Caused by: java.lang.ClassNotFoundException:
>> com.hadoop.compression.lzo.LzoCodec
>>         at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>>         at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>         at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
>>         at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
>>         at java.lang.Class.forName0(Native Method)
>>         at java.lang.Class.forName(Class.java:264)
>>         at
>> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:810)
>>         at
>> org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:109)
>>
>
>


-- 
Zhijie Shen
Hortonworks Inc.
http://hortonworks.com/


Re: Compression codec com.hadoop.compression.lzo.LzoCodec not found

Posted by Zhijie Shen <zs...@hortonworks.com>.
For the codecs, you can choose
among org.apache.hadoop.io.compress.*Codec. LzoCodec has been moved out of
Hadoop (see HADOOP-4874).

- Zhijie


On Wed, Feb 12, 2014 at 10:54 AM, Ted Yu <yu...@gmail.com> wrote:

> What's the value for "io.compression.codecs" config parameter ?
>
> Thanks
>
>
> On Tue, Feb 11, 2014 at 10:11 PM, Li Li <fa...@gmail.com> wrote:
>
>> I am runing example of wordcout but encount an exception:
>> I googled and know lzo compression's license is incompatible with apache's
>> so it's not built in.
>> the question is I am using default configuration of hadoop 1.2.1, why
>> it need lzo?
>> anothe question is, what's Cleaning up the staging area mean?
>>
>>
>> ./bin/hadoop jar hadoop-examples-1.2.1.jar wordcount /lili/data.txt
>> /lili/test
>>
>> 14/02/12 14:06:10 INFO input.FileInputFormat: Total input paths to
>> process : 1
>> 14/02/12 14:06:10 INFO mapred.JobClient: Cleaning up the staging area
>> hdfs://
>> 172.19.34.24:8020/home/hadoop/dfsdir/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201401080916_0216
>> java.lang.IllegalArgumentException: Compression codec
>> com.hadoop.compression.lzo.LzoCodec not found.
>>         at
>> org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:116)
>>         at
>> org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:156)
>>         at
>> org.apache.hadoop.mapreduce.lib.input.TextInputFormat.isSplitable(TextInputFormat.java:47)
>>         at
>> org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:258)
>>         at
>> org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1054)
>>         at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1071)
>>         at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
>>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
>>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at javax.security.auth.Subject.doAs(Subject.java:415)
>>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>>         at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
>>         at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
>>         at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
>>         at org.apache.hadoop.examples.WordCount.main(WordCount.java:82)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.lang.reflect.Method.invoke(Method.java:601)
>>         at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>>         at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>>         at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.lang.reflect.Method.invoke(Method.java:601)
>>         at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
>> Caused by: java.lang.ClassNotFoundException: com.hadoop.compression.lzo.LzoCodec
>>         at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>>         at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>         at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
>>         at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
>>         at java.lang.Class.forName0(Native Method)
>>         at java.lang.Class.forName(Class.java:264)
>>         at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:810)
>>         at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:109)
>>
>
>


-- 
Zhijie Shen
Hortonworks Inc.
http://hortonworks.com/

-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.

Re: Compression codec com.hadoop.compression.lzo.LzoCodec not found

Posted by Li Li <fa...@gmail.com>.
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec</value>
  <description>A list of the compression codec classes that can be used
               for compression/decompression.</description>
</property>

<property>
  <name>io.compression.codec.lzo.class</name>
  <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>
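The properties above name com.hadoop.compression.lzo classes that do not ship with Apache Hadoop. If hadoop-lzo is not installed, a sketch of a core-site.xml entry that keeps only the bundled codecs would be (values are illustrative, not this cluster's full config):

```xml
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec</value>
</property>
```

With the LZO entries removed, CompressionCodecFactory no longer tries to load LzoCodec at job-submission time, so the ClassNotFoundException disappears.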

On Thu, Feb 13, 2014 at 2:54 AM, Ted Yu <yu...@gmail.com> wrote:
> What's the value for "io.compression.codecs" config parameter ?
>
> Thanks
>
>
> On Tue, Feb 11, 2014 at 10:11 PM, Li Li <fa...@gmail.com> wrote:
>>
>> I am running the WordCount example but encountered an exception.
>> I googled and learned that LZO compression's license is incompatible with
>> Apache's, so it's not built in.
>> My first question: I am using the default configuration of Hadoop 1.2.1,
>> so why does it need LZO?
>> My second question: what does "Cleaning up the staging area" mean?
>>
>>
>> ./bin/hadoop jar hadoop-examples-1.2.1.jar wordcount /lili/data.txt /lili/test
>>
>> 14/02/12 14:06:10 INFO input.FileInputFormat: Total input paths to process : 1
>> 14/02/12 14:06:10 INFO mapred.JobClient: Cleaning up the staging area
>> hdfs://172.19.34.24:8020/home/hadoop/dfsdir/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201401080916_0216
>> java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found.
>>         at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:116)
>>         at org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:156)
>>         at org.apache.hadoop.mapreduce.lib.input.TextInputFormat.isSplitable(TextInputFormat.java:47)
>>         at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:258)
>>         at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1054)
>>         at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1071)
>>         at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
>>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
>>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at javax.security.auth.Subject.doAs(Subject.java:415)
>>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>>         at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
>>         at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
>>         at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
>>         at org.apache.hadoop.examples.WordCount.main(WordCount.java:82)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.lang.reflect.Method.invoke(Method.java:601)
>>         at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>>         at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>>         at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.lang.reflect.Method.invoke(Method.java:601)
>>         at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
>> Caused by: java.lang.ClassNotFoundException: com.hadoop.compression.lzo.LzoCodec
>>         at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>>         at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>         at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
>>         at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
>>         at java.lang.Class.forName0(Native Method)
>>         at java.lang.Class.forName(Class.java:264)
>>         at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:810)
>>         at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:109)
>
>

Re: Compression codec com.hadoop.compression.lzo.LzoCodec not found

Posted by Zhijie Shen <zs...@hortonworks.com>.
For the codecs, you can choose
among org.apache.hadoop.io.compress.*Codec. LzoCodec has been moved out of
Hadoop (see HADOOP-4874).
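One way to see which configured codecs fall outside the bundled org.apache.hadoop.io.compress package (a sketch, not a command from this thread; the /tmp path stands in for $HADOOP_HOME/conf/core-site.xml, where the value usually lives in Hadoop 1.x):

```shell
# Write a throwaway config fragment to illustrate; substitute your real file.
conf=/tmp/core-site.xml
cat > "$conf" <<'EOF'
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec</value>
</property>
EOF
# Extract the codec list, one class per line, and keep only the classes
# that are NOT bundled with Hadoop, i.e. need an extra jar on the classpath.
grep -o '<value>[^<]*</value>' "$conf" \
  | sed 's/<[^>]*>//g' \
  | tr ',' '\n' \
  | grep -v '^org\.apache\.hadoop\.io\.compress\.'
# prints: com.hadoop.compression.lzo.LzoCodec
```

Any class this prints must either be backed by a jar on the Hadoop classpath (e.g. hadoop-lzo) or be removed from io.compression.codecs.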

- Zhijie


On Wed, Feb 12, 2014 at 10:54 AM, Ted Yu <yu...@gmail.com> wrote:

> What's the value for "io.compression.codecs" config parameter ?
>
> Thanks
>
>
> On Tue, Feb 11, 2014 at 10:11 PM, Li Li <fa...@gmail.com> wrote:
>
>> I am running the WordCount example but encountered an exception.
>> I googled and learned that LZO compression's license is incompatible with
>> Apache's, so it's not built in.
>> My first question: I am using the default configuration of Hadoop 1.2.1,
>> so why does it need LZO?
>> My second question: what does "Cleaning up the staging area" mean?
>>
>>
>> ./bin/hadoop jar hadoop-examples-1.2.1.jar wordcount /lili/data.txt /lili/test
>>
>> 14/02/12 14:06:10 INFO input.FileInputFormat: Total input paths to process : 1
>> 14/02/12 14:06:10 INFO mapred.JobClient: Cleaning up the staging area
>> hdfs://172.19.34.24:8020/home/hadoop/dfsdir/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201401080916_0216
>> java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found.
>>         at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:116)
>>         at org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:156)
>>         at org.apache.hadoop.mapreduce.lib.input.TextInputFormat.isSplitable(TextInputFormat.java:47)
>>         at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:258)
>>         at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1054)
>>         at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1071)
>>         at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
>>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
>>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at javax.security.auth.Subject.doAs(Subject.java:415)
>>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>>         at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
>>         at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
>>         at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
>>         at org.apache.hadoop.examples.WordCount.main(WordCount.java:82)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.lang.reflect.Method.invoke(Method.java:601)
>>         at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>>         at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>>         at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>         at java.lang.reflect.Method.invoke(Method.java:601)
>>         at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
>> Caused by: java.lang.ClassNotFoundException: com.hadoop.compression.lzo.LzoCodec
>>         at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>>         at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>>         at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
>>         at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
>>         at java.lang.Class.forName0(Native Method)
>>         at java.lang.Class.forName(Class.java:264)
>>         at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:810)
>>         at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:109)
>>
>
>


-- 
Zhijie Shen
Hortonworks Inc.
http://hortonworks.com/


Re: Compression codec com.hadoop.compression.lzo.LzoCodec not found

Posted by Ted Yu <yu...@gmail.com>.
What's the value for "io.compression.codecs" config parameter ?

Thanks


On Tue, Feb 11, 2014 at 10:11 PM, Li Li <fa...@gmail.com> wrote:

> I am running the WordCount example but encountered an exception.
> I googled and learned that LZO compression's license is incompatible with
> Apache's, so it's not built in.
> My first question: I am using the default configuration of Hadoop 1.2.1,
> so why does it need LZO?
> My second question: what does "Cleaning up the staging area" mean?
>
>
> ./bin/hadoop jar hadoop-examples-1.2.1.jar wordcount /lili/data.txt /lili/test
>
> 14/02/12 14:06:10 INFO input.FileInputFormat: Total input paths to process : 1
> 14/02/12 14:06:10 INFO mapred.JobClient: Cleaning up the staging area
> hdfs://172.19.34.24:8020/home/hadoop/dfsdir/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201401080916_0216
> java.lang.IllegalArgumentException: Compression codec com.hadoop.compression.lzo.LzoCodec not found.
>         at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:116)
>         at org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:156)
>         at org.apache.hadoop.mapreduce.lib.input.TextInputFormat.isSplitable(TextInputFormat.java:47)
>         at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:258)
>         at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1054)
>         at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1071)
>         at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
>         at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>         at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
>         at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
>         at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
>         at org.apache.hadoop.examples.WordCount.main(WordCount.java:82)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:601)
>         at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>         at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
>         at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:601)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
> Caused by: java.lang.ClassNotFoundException: com.hadoop.compression.lzo.LzoCodec
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
>         at java.lang.Class.forName0(Native Method)
>         at java.lang.Class.forName(Class.java:264)
>         at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:810)
>         at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:109)
>

> org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:109)
>
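For context, the stock Hadoop 1.2.1 default for io.compression.codecs does not include LZO, so this error usually points to a core-site.xml inherited from an LZO-enabled distribution. A minimal sketch of the property with the LZO entry removed, assuming only the codecs that ship with Apache Hadoop are needed (illustrative, not taken from the thread):

```xml
<!-- conf/core-site.xml: illustrative sketch. Dropping
     com.hadoop.compression.lzo.LzoCodec from this list avoids the
     ClassNotFoundException when the hadoop-lzo jar is not installed. -->
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec</value>
</property>
```

Alternatively, installing the hadoop-lzo jar and its native libraries on the client and all nodes would let the configured LzoCodec load as-is.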
