Posted to dev@flume.apache.org by "jiraposter@reviews.apache.org (JIRA)" <ji...@apache.org> on 2011/09/19 02:04:10 UTC

[jira] [Commented] (FLUME-580) Flume needs to be consistent with autodiscovery of Hadoop compression codecs

    [ https://issues.apache.org/jira/browse/FLUME-580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13107564#comment-13107564 ] 

jiraposter@reviews.apache.org commented on FLUME-580:
-----------------------------------------------------


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/1949/
-----------------------------------------------------------

(Updated 2011-09-19 00:04:07.721885)


Review request for Flume, Arvind Prabhakar and Roman Shaposhnik.


Changes
-------

added link to jira


Summary
-------

commit b34670902d4c667f98ffe7e619c436148c89e000
Author: Jonathan Hsieh <jo...@cloudera.com>
Date:   Sun Sep 18 11:05:42 2011 -0700

    FLUME-580: Flume needs to be consistent with autodiscovery of Hadoop compression codecs
    
    - Adds native paths for hadoop native codec support


This addresses bug FLUME-580.
    https://issues.apache.org/jira/browse/FLUME-580


Diffs
-----

  bin/flume 183cf70 

Diff: https://reviews.apache.org/r/1949/diff


Testing
-------

Built and ran 

flume sink 'collectorSink("file:///tmp/snappy", "foo", 10000, seqfile("snappy"))'

typed some stuff at the console, then ^D.

/tmp/snappy dir had files that were snappy codec compressed.


Thanks,

jmhsieh



> Flume needs to be consistent with autodiscovery of Hadoop compression codecs
> ----------------------------------------------------------------------------
>
>                 Key: FLUME-580
>                 URL: https://issues.apache.org/jira/browse/FLUME-580
>             Project: Flume
>          Issue Type: Bug
>            Reporter: Disabled imported user
>            Assignee: Jonathan Hsieh
>
> Currently, if Flume notices that LZO is configured in core-site.xml it tries to load it. However, since it doesn't follow the same route that Hadoop does for locating the JAR and the corresponding .so files, it fails to do so. Flume has to leverage the same logic that Hadoop has embedded in bin/hadoop:
> ...
> # setup 'java.library.path' for native-hadoop code if necessary
> if [ -d "${HADOOP_HOME}/build/native" -o -d "${HADOOP_HOME}/lib/native" -o -d "${HADOOP_HOME}/sbin" ]; then
>   JAVA_PLATFORM=`CLASSPATH=${CLASSPATH} ${JAVA} -Xmx32m ${HADOOP_JAVA_PLATFORM_OPTS} org.apache.hadoop.util.PlatformName | sed -e "s/ /_/g"`
> ...
> so that it can transparently find LZO objects that it tries to autodiscover.
> For the time being, however, the workaround seems to be to tell Flume explicitly where to look for LZO by adding the following to /usr/lib/flume/bin/flume-env.sh:
> export JAVA_LIBRARY_PATH=/usr/lib/hadoop-0.20/lib/native/Linux-amd64-64/
> export FLUME_CLASSPATH=/usr/lib/hadoop-0.20/lib/hadoop-lzo-20101122174751.20101122171345.552b3f9.jar
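The bin/hadoop logic quoted above can be sketched as a standalone shell function. This is a minimal illustration, not the actual patch to bin/flume: the function name `add_native_paths` is hypothetical, and the platform string (normally produced by running org.apache.hadoop.util.PlatformName, e.g. "Linux-amd64-64") is passed in as an argument so the sketch can run without a Hadoop install.

```shell
# Hypothetical sketch of the native-library probing that bin/hadoop does
# and that FLUME-580 ports to bin/flume. Given HADOOP_HOME and the
# platform string from org.apache.hadoop.util.PlatformName, build a
# colon-separated java.library.path from the directories that exist.
add_native_paths() {
  local hadoop_home="$1" java_platform="$2"
  local dir path=""
  for dir in "$hadoop_home/build/native" "$hadoop_home/lib/native"; do
    # Only append directories that actually exist for this platform.
    if [ -d "$dir/$java_platform" ]; then
      path="${path:+$path:}$dir/$java_platform"
    fi
  done
  printf '%s' "$path"
}
```

With this in place, bin/flume could pass the result to the JVM as -Djava.library.path=... instead of requiring users to export JAVA_LIBRARY_PATH by hand as in the workaround above.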

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira