Posted to user@flume.apache.org by Shara Shi <sh...@dhgate.com> on 2012/06/21 09:31:05 UTC

flume agent start failure in server without hadoop

Hi all,

 

When I execute ./flume-ng agent -name agent1 -f ../conf/flume.conf on a web
server, I get the following output.

 

+ exec /usr/local/jdk/jdk1.6.0_26/bin/java -Xmx20m -cp
'/tmp/flume-1.2.0-incubating-SNAPSHOT/lib/*' -Djava.library.path=
org.apache.flume.node.Application -name agent1 -f ../conf/flume.conf

log4j:WARN No appenders could be found for logger
(org.apache.flume.lifecycle.LifecycleSupervisor).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
more info.

 

 

I checked the flume-ng script and found the add_hadoop_paths and
add_HBASE_paths functions. I suspect those two functions are the reason the
agent does not start successfully.

I am confused: do I have to install Hadoop on every web server? That does not
seem reasonable.

How can I start a Flume agent successfully on a server without Hadoop?

My flume.conf, which does not access HDFS, is listed below.

# Define a memory channel called ch1 on agent1
agent1.channels.ch1.type = memory

# Define an Avro source called avro-source1 on agent1 and tell it
# to bind to 0.0.0.0:41414. Connect it to channel ch1.
agent1.sources.avro-source1.channels = ch1
agent1.sources.avro-source1.type = avro
agent1.sources.avro-source1.bind = 0.0.0.0
agent1.sources.avro-source1.port = 41414

# Define a tail source
agent1.sources.tail1.channels = ch1
agent1.sources.tail1.type = exec
agent1.sources.tail1.command = tail -n +0 -F /tmp/test2.log

# Define a logger sink that simply logs all events it receives
# and connect it to the other end of the same channel.
agent1.sinks.log-sink1.channel = ch1
agent1.sinks.log-sink1.type = logger

# Finally, now that we've defined all of our components, tell
# agent1 which ones we want to activate.
agent1.channels = ch1
#agent1.sources = avro-source1
agent1.sinks = log-sink1
agent1.sources = tail1

 

Regards

Ruihong

 


Re: flume agent start failure in server without hadoop

Posted by shekhar sharma <sh...@gmail.com>.
Hi,
It is not necessary to have Hadoop and HBase installed on your system if you
don't want to use them.
In your configuration, you have two sources attached to the same channel,
both feeding the same sink.

Please correct me if I am wrong, but for every source there should be one
channel, while a channel can have multiple sinks.
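
For example, splitting your two sources onto separate channels might look
like the sketch below. This is untested and based on your flume.conf; the
names ch2 and log-sink2 are placeholders I made up for illustration.

# Sketch: one channel per source, a logger sink draining each channel.
agent1.sources = avro-source1 tail1
agent1.channels = ch1 ch2
agent1.sinks = log-sink1 log-sink2

# Two independent memory channels.
agent1.channels.ch1.type = memory
agent1.channels.ch2.type = memory

# The Avro source writes into ch1.
agent1.sources.avro-source1.type = avro
agent1.sources.avro-source1.bind = 0.0.0.0
agent1.sources.avro-source1.port = 41414
agent1.sources.avro-source1.channels = ch1

# The exec (tail) source writes into ch2.
agent1.sources.tail1.type = exec
agent1.sources.tail1.command = tail -n +0 -F /tmp/test2.log
agent1.sources.tail1.channels = ch2

# Each logger sink drains exactly one channel.
agent1.sinks.log-sink1.type = logger
agent1.sinks.log-sink1.channel = ch1
agent1.sinks.log-sink2.type = logger
agent1.sinks.log-sink2.channel = ch2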

For testing, keep just the exec source and the logger sink.

Also, in the command you mentioned, are you storing the agent definition in
the flume.conf file?

Try a simpler approach: define your agent (source, channel, and sink) in an
agent1.properties file, then run your command like this:

./flume-ng agent --conf ../conf/ -f ../conf/agent1.properties -n agent1
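
Passing --conf that way should also clear the log4j warnings you pasted,
since flume-ng puts the --conf directory on the classpath, which is where
log4j looks for a log4j.properties. If you don't have one there yet, here is
a minimal sketch (just one common setup, not the only way to do it):

# ../conf/log4j.properties - log INFO and above to the console.
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{dd MMM yyyy HH:mm:ss} %-5p [%t] %c: %m%n

If your log4j.properties instead follows the stock Flume convention of
reading the flume.root.logger property, you can also adjust logging per run
by appending something like -Dflume.root.logger=DEBUG,console to the
flume-ng command.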

Regards,
Som Shekhar
