Posted to user@flume.apache.org by prabhu k <pr...@gmail.com> on 2012/09/17 12:17:18 UTC

tail source exec unable to HDFS sink.

Hi Users,

I have followed the link below to sink a sample text file to HDFS using a tail
source.

http://cloudfront.blogspot.in/2012/06/how-to-use-host-escape-sequence-in.html#more

I have executed flume-ng with the command below, but it seems to get stuck.
The flume.conf file is attached.

#bin/flume-ng agent -n agent1 -c /conf -f conf/flume.conf


flume.conf
==========
agent1.sources = tail
agent1.channels = MemoryChannel-2
agent1.sinks = HDFS

agent1.sources.tail.type = exec
agent1.sources.tail.command = tail -F
/usr/local/flume_dir/flume/flume-1.2.0-incubating-SNAPSHOT/flume_test.txt
agent1.sources.tail.channels = MemoryChannel-2

agent1.sources.tail.interceptors = hostint
agent1.sources.tail.interceptors.hostint.type =
org.apache.flume.interceptor.HostInterceptor$Builder
agent1.sources.tail.interceptors.hostint.preserveExisting = true
agent1.sources.tail.interceptors.hostint.useIP = false

agent1.sinks.HDFS.hdfs.channel = MemoryChannel-2
agent1.sinks.HDFS.hdfs.type = hdfs
agent1.sinks.HDFS.hdfs.path = hdfs://<hostname>:54310/user

agent1.sinks.HDFS.hdfs.fileType = dataStream
agent1.sinks.HDFS.hdfs.writeFormat = text
agent1.channels.MemoryChannel-2.type = memory



flume.log
==========
12/09/17 15:40:05 INFO lifecycle.LifecycleSupervisor: Starting lifecycle
supervisor 1
12/09/17 15:40:05 INFO node.FlumeNode: Flume node starting - agent1
12/09/17 15:40:05 INFO nodemanager.DefaultLogicalNodeManager: Node manager
starting
12/09/17 15:40:05 INFO properties.PropertiesFileConfigurationProvider:
Configuration provider starting
12/09/17 15:40:05 INFO lifecycle.LifecycleSupervisor: Starting lifecycle
supervisor 10
12/09/17 15:40:05 INFO properties.PropertiesFileConfigurationProvider:
Reloading configuration file:conf/flume.conf
12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
12/09/17 15:40:05 INFO conf.FlumeConfiguration: Added sinks: HDFS Agent:
agent1
12/09/17 15:40:05 WARN conf.FlumeConfiguration: Configuration empty for:
HDFS.Removed.
12/09/17 15:40:05 INFO conf.FlumeConfiguration: Post-validation flume
configuration contains configuration  for agents: [agent1]
12/09/17 15:40:05 INFO properties.PropertiesFileConfigurationProvider:
Creating channels
12/09/17 15:40:05 INFO properties.PropertiesFileConfigurationProvider:
created channel MemoryChannel-2
12/09/17 15:40:05 INFO nodemanager.DefaultLogicalNodeManager: Starting new
configuration:{ sourceRunners:{tail=EventDrivenSourceRunner: {
source:org.apache.flume.source.ExecSource@c24c0 }} sinkRunners:{}
channels:{MemoryChannel-2=org.apache.flume.channel.MemoryChannel@140c281} }
12/09/17 15:40:05 INFO nodemanager.DefaultLogicalNodeManager: Starting
Channel MemoryChannel-2
12/09/17 15:40:05 INFO nodemanager.DefaultLogicalNodeManager: Starting
Source tail
12/09/17 15:40:05 INFO source.ExecSource: Exec source starting with
command:tail -F
/usr/local/flume_dir/flume/flume-1.2.0-incubating-SNAPSHOT/flume_test.txt

Please suggest and help me on this issue.
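Much of the thread below turns on how `tail` is invoked in the exec source's
command, so a quick local sketch of the difference may help (the file path
here is illustrative, not the one from the config):

```shell
# Plain `tail` is one-shot: it prints the last 10 lines and exits.
# `tail -F` follows the file and emits lines as they are appended,
# which is what a long-running exec source needs.
printf 'event1\nevent2\nevent3\n' > /tmp/flume_tail_demo.txt
tail /tmp/flume_tail_demo.txt        # prints event1..event3, then exits
echo 'event4' >> /tmp/flume_tail_demo.txt
# tail -F /tmp/flume_tail_demo.txt   # would keep running and also show event4
```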

Re: tail source exec unable to HDFS sink.

Posted by prabhu k <pr...@gmail.com>.
Yes, I am unable to see data in HDFS. I have appended new data, but I still
cannot see it in the HDFS /user directory.

Thanks,
Prabhu



On Thu, Sep 20, 2012 at 8:18 PM, Brock Noland <br...@cloudera.com> wrote:

> It actually looks like it's working. Are you sure no data is showing
> up in hdfs and that new data is being appended to the file you are
> tailing?

Re: tail source exec unable to HDFS sink.

Posted by Brock Noland <br...@cloudera.com>.
It actually looks like it's working. Are you sure no data is showing
up in hdfs and that new data is being appended to the file you are
tailing?




-- 
Apache MRUnit - Unit testing MapReduce - http://incubator.apache.org/mrunit/

Re: tail source exec unable to HDFS sink.

Posted by prabhu k <pr...@gmail.com>.
Hi Brock,

I have made the change as per your suggestion, but the issue is still the
same; the script seems stuck. The flume.log output is pasted below.

flume.log
==========
12/09/20 14:07:40 INFO lifecycle.LifecycleSupervisor: Starting lifecycle
supervisor 1
12/09/20 14:07:40 INFO node.FlumeNode: Flume node starting - agent1
12/09/20 14:07:40 INFO nodemanager.DefaultLogicalNodeManager: Node manager
starting
12/09/20 14:07:40 INFO properties.PropertiesFileConfigurationProvider:
Configuration provider starting
12/09/20 14:07:40 INFO lifecycle.LifecycleSupervisor: Starting lifecycle
supervisor 10
12/09/20 14:07:40 INFO properties.PropertiesFileConfigurationProvider:
Reloading configuration file:conf/flume.conf
12/09/20 14:07:40 INFO conf.FlumeConfiguration: Processing:HDFS
12/09/20 14:07:40 INFO conf.FlumeConfiguration: Processing:HDFS
12/09/20 14:07:40 INFO conf.FlumeConfiguration: Processing:HDFS
12/09/20 14:07:40 INFO conf.FlumeConfiguration: Processing:HDFS
12/09/20 14:07:40 INFO conf.FlumeConfiguration: Processing:HDFS
12/09/20 14:07:40 INFO conf.FlumeConfiguration: Added sinks: HDFS Agent:
agent1
12/09/20 14:07:40 INFO conf.FlumeConfiguration: Post-validation flume
configuration contains configuration  for agents: [agent1]
12/09/20 14:07:40 INFO properties.PropertiesFileConfigurationProvider:
Creating channels
12/09/20 14:07:40 INFO properties.PropertiesFileConfigurationProvider:
created channel MemoryChannel-2
12/09/20 14:07:40 INFO sink.DefaultSinkFactory: Creating instance of sink
HDFS typehdfs


On Mon, Sep 17, 2012 at 8:01 PM, Brock Noland <br...@cloudera.com> wrote:

> Hi,
>
> I believe, this line:
> agent1.sinks.HDFS.hdfs.type = hdfs
>
> should be:
> agent1.sinks.HDFS.type = hdfs
>
> Brock

Re: tail source exec unable to HDFS sink.

Posted by Brock Noland <br...@cloudera.com>.
This line
agent1.sinks.HDFS.hdfs.channel = MemoryChannel-2

should be
agent1.sinks.HDFS.channel = MemoryChannel-2

Brock

On Tue, Sep 18, 2012 at 6:27 AM, prabhu k <pr...@gmail.com> wrote:
> Hi,
>
> Please find the following flume.conf & flume.log files.
>
> I have marked the lines in question in red below; is there any issue with them?
>
> flume.conf
> =============
>
> agent1.sources = tail
> agent1.channels = MemoryChannel-2
> agent1.sinks = HDFS
> agent1.sources.tail.type = exec
> agent1.sources.tail.command = tail
> /usr/local/flume_dir/flume/flume-1.2.0-incubating-SNAPSHOT/flume_test.txt
> agent1.sources.tail.channels = MemoryChannel-2
> agent1.sources.tail.interceptors = hostint
> agent1.sources.tail.interceptors.hostint.type =
> org.apache.flume.interceptor.HostInterceptor$Builder
> agent1.sources.tail.interceptors.hostint.preserveExisting = true
> agent1.sources.tail.interceptors.hostint.useIP = false
> agent1.sinks.HDFS.hdfs.channel = MemoryChannel-2
> agent1.sinks.HDFS.type = hdfs
> agent1.sinks.HDFS.hdfs.path = hdfs://<hostname>:54310/user/
> agent1.sinks.HDFS.hdfs.fileType = dataStream
> agent1.sinks.HDFS.hdfs.writeFormat = text
> agent1.channels.MemoryChannel-2.type = memory
>
> flume.log
> ================
> 12/09/18 16:52:16 INFO lifecycle.LifecycleSupervisor: Starting lifecycle
> supervisor 1
> 12/09/18 16:52:16 INFO node.FlumeNode: Flume node starting - agent1
> 12/09/18 16:52:16 INFO nodemanager.DefaultLogicalNodeManager: Node manager
> starting
> 12/09/18 16:52:16 INFO properties.PropertiesFileConfigurationProvider:
> Configuration provider starting
> 12/09/18 16:52:16 INFO lifecycle.LifecycleSupervisor: Starting lifecycle
> supervisor 10
> 12/09/18 16:52:16 INFO properties.PropertiesFileConfigurationProvider:
> Reloading configuration file:conf/flume.conf
> 12/09/18 16:52:16 INFO conf.FlumeConfiguration: Processing:HDFS
> 12/09/18 16:52:16 INFO conf.FlumeConfiguration: Processing:HDFS
> 12/09/18 16:52:16 INFO conf.FlumeConfiguration: Processing:HDFS
> 12/09/18 16:52:16 INFO conf.FlumeConfiguration: Processing:HDFS
> 12/09/18 16:52:16 INFO conf.FlumeConfiguration: Processing:HDFS
> 12/09/18 16:52:16 INFO conf.FlumeConfiguration: Added sinks: HDFS Agent:
> agent1
> 12/09/18 16:52:16 WARN conf.FlumeConfiguration: Configuration empty for:
> HDFS.Removed.
> 12/09/18 16:52:16 INFO conf.FlumeConfiguration: Post-validation flume
> configuration contains configuration  for agents: [agent1]
> 12/09/18 16:52:16 INFO properties.PropertiesFileConfigurationProvider:
> Creating channels
> 12/09/18 16:52:16 INFO properties.PropertiesFileConfigurationProvider:
> created channel MemoryChannel-2
> 12/09/18 16:52:16 INFO nodemanager.DefaultLogicalNodeManager: Starting new
> configuration:{ sourceRunners:{tail=EventDrivenSourceRunner: {
> source:org.apache.flume.source.ExecSource@a1d1f4 }} sinkRunners:{}
> channels:{MemoryChannel-2=org.apache.flume.channel.MemoryChannel@1df280b} }
> 12/09/18 16:52:16 INFO nodemanager.DefaultLogicalNodeManager: Starting
> Channel MemoryChannel-2
> 12/09/18 16:52:16 INFO nodemanager.DefaultLogicalNodeManager: Starting
> Source tail
> 12/09/18 16:52:16 INFO source.ExecSource: Exec source starting with
> command:tail
> /usr/local/flume_dir/flume/flume-1.2.0-incubating-SNAPSHOT/flume_test.txt
>
> -Prabhu
>
> On Tue, Sep 18, 2012 at 4:40 PM, Brock Noland <br...@cloudera.com> wrote:
>>
>> Yes they should work together. Please send the updated conf and log file.
>>
>> --
>> Brock Noland
>> Sent with Sparrow
>>
>> On Tuesday, September 18, 2012 at 5:49 AM, prabhu k wrote:
>>
>> I have tried both ways, but it is still not working.
>> Can you please confirm that Flume 1.2.0 supports Hadoop 1.0.3?
>>
>> Thanks,
>> Prabhu.
>> On Tue, Sep 18, 2012 at 3:32 PM, Nitin Pawar <ni...@gmail.com>
>> wrote:
>>
>> Can you write something to the file continuously after you start flume-ng?
>>
>> If you use tail -f, it will pick up only new entries.
>> Alternatively, you can change the command in the config file from tail -f to
>> plain tail, so each run brings in the last 10 lines of the file by default.
>>
>> ~nitin
>>
>> On Tue, Sep 18, 2012 at 2:51 PM, prabhu k <pr...@gmail.com> wrote:
>> > Hi Nitin,
>> >
>> > While flume-ng was executing, I updated the flume_test.txt file, but I am
>> > still unable to sink the data to HDFS.
>> >
>> > Thanks,
>> > Prabhu.
>> >
>> > On Tue, Sep 18, 2012 at 2:35 PM, Nitin Pawar <ni...@gmail.com>
>> > wrote:
>> >>
>> >> Hi Prabhu,
>> >>
>> >> Are you sure there is text being continuously written to your file
>> >> flume_test.txt?
>> >>
>> >> If nothing is written to that file, Flume will not write anything into
>> >> HDFS.
>> >>
>> >> On Tue, Sep 18, 2012 at 2:31 PM, prabhu k <pr...@gmail.com>
>> >> wrote:
>> >> > Hi Brock,
>> >> >
>> >> > Thanks for the reply.
>> >> >
>> >> > As per your suggestion, I have modified the config, but the issue is
>> >> > still the same.
>> >> >
>> >> > My Hadoop version is 1.0.3 and my Flume version is 1.2.0. Please let us
>> >> > know if there is any version incompatibility.
>> >> >
>> >> > On Mon, Sep 17, 2012 at 8:01 PM, Brock Noland <br...@cloudera.com>
>> >> > wrote:
>> >> >>
>> >> >> Hi,
>> >> >>
>> >> >> I believe, this line:
>> >> >> agent1.sinks.HDFS.hdfs.type = hdfs
>> >> >>
>> >> >> should be:
>> >> >> agent1.sinks.HDFS.type = hdfs
>> >> >>
>> >> >> Brock
>> >>
>> >>
>> >>
>> >> --
>> >> Nitin Pawar
>> >
>> >
>>
>>
>>
>> --
>> Nitin Pawar
>>
>>
>>
>



-- 
Apache MRUnit - Unit testing MapReduce - http://incubator.apache.org/mrunit/
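Putting Brock's two corrections together (the sink's type and channel are
top-level sink properties, not nested under the hdfs. prefix), a corrected
flume.conf would look roughly like this. This is a sketch, not a tested
config from the thread: the DataStream/Text capitalization follows the Flume
HDFS-sink documentation and was not discussed above, and the <hostname>
placeholder is kept as-is.

```properties
agent1.sources = tail
agent1.channels = MemoryChannel-2
agent1.sinks = HDFS

agent1.sources.tail.type = exec
agent1.sources.tail.command = tail -F /usr/local/flume_dir/flume/flume-1.2.0-incubating-SNAPSHOT/flume_test.txt
agent1.sources.tail.channels = MemoryChannel-2

agent1.sources.tail.interceptors = hostint
agent1.sources.tail.interceptors.hostint.type = org.apache.flume.interceptor.HostInterceptor$Builder
agent1.sources.tail.interceptors.hostint.preserveExisting = true
agent1.sources.tail.interceptors.hostint.useIP = false

# type and channel sit directly on the sink; only HDFS-specific
# settings carry the hdfs. prefix
agent1.sinks.HDFS.type = hdfs
agent1.sinks.HDFS.channel = MemoryChannel-2
agent1.sinks.HDFS.hdfs.path = hdfs://<hostname>:54310/user
agent1.sinks.HDFS.hdfs.fileType = DataStream
agent1.sinks.HDFS.hdfs.writeFormat = Text

agent1.channels.MemoryChannel-2.type = memory
```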

Re: tail source exec unable to HDFS sink.

Posted by prabhu k <pr...@gmail.com>.
Hi,

Please find the following flume.conf & flume.log files.

I have marked the lines in question in red below; is there any issue with them?

flume.conf
=============
agent1.sources = tail
agent1.channels = MemoryChannel-2
agent1.sinks = HDFS
agent1.sources.tail.type = exec
agent1.sources.tail.command = tail
/usr/local/flume_dir/flume/flume-1.2.0-incubating-SNAPSHOT/flume_test.txt
agent1.sources.tail.channels = MemoryChannel-2
agent1.sources.tail.interceptors = hostint
agent1.sources.tail.interceptors.hostint.type =
org.apache.flume.interceptor.HostInterceptor$Builder
agent1.sources.tail.interceptors.hostint.preserveExisting = true
agent1.sources.tail.interceptors.hostint.useIP = false
agent1.sinks.HDFS.hdfs.channel = MemoryChannel-2
agent1.sinks.HDFS.type = hdfs
agent1.sinks.HDFS.hdfs.path = hdfs://<hostname>:54310/user/
agent1.sinks.HDFS.hdfs.fileType = dataStream
agent1.sinks.HDFS.hdfs.writeFormat = text
agent1.channels.MemoryChannel-2.type = memory

flume.log
================
12/09/18 16:52:16 INFO lifecycle.LifecycleSupervisor: Starting lifecycle
supervisor 1
12/09/18 16:52:16 INFO node.FlumeNode: Flume node starting - agent1
12/09/18 16:52:16 INFO nodemanager.DefaultLogicalNodeManager: Node manager
starting
12/09/18 16:52:16 INFO properties.PropertiesFileConfigurationProvider:
Configuration provider starting
12/09/18 16:52:16 INFO lifecycle.LifecycleSupervisor: Starting lifecycle
supervisor 10
12/09/18 16:52:16 INFO properties.PropertiesFileConfigurationProvider:
Reloading configuration file:conf/flume.conf
12/09/18 16:52:16 INFO conf.FlumeConfiguration: Processing:HDFS
12/09/18 16:52:16 INFO conf.FlumeConfiguration: Processing:HDFS
12/09/18 16:52:16 INFO conf.FlumeConfiguration: Processing:HDFS
12/09/18 16:52:16 INFO conf.FlumeConfiguration: Processing:HDFS
12/09/18 16:52:16 INFO conf.FlumeConfiguration: Processing:HDFS
12/09/18 16:52:16 INFO conf.FlumeConfiguration: Added sinks: HDFS Agent:
agent1
12/09/18 16:52:16 WARN conf.FlumeConfiguration: Configuration empty for:
HDFS.Removed.
12/09/18 16:52:16 INFO conf.FlumeConfiguration: Post-validation flume
configuration contains configuration  for agents: [agent1]
12/09/18 16:52:16 INFO properties.PropertiesFileConfigurationProvider:
Creating channels
12/09/18 16:52:16 INFO properties.PropertiesFileConfigurationProvider:
created channel MemoryChannel-2
12/09/18 16:52:16 INFO nodemanager.DefaultLogicalNodeManager: Starting new
configuration:{ sourceRunners:{tail=EventDrivenSourceRunner: {
source:org.apache.flume.source.ExecSource@a1d1f4 }} sinkRunners:{}
channels:{MemoryChannel-2=org.apache.flume.channel.MemoryChannel@1df280b} }
12/09/18 16:52:16 INFO nodemanager.DefaultLogicalNodeManager: Starting
Channel MemoryChannel-2
12/09/18 16:52:16 INFO nodemanager.DefaultLogicalNodeManager: Starting
Source tail
12/09/18 16:52:16 INFO source.ExecSource: Exec source starting with
command:tail
/usr/local/flume_dir/flume/flume-1.2.0-incubating-SNAPSHOT/flume_test.txt

-Prabhu
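For reference, a sketch of how the sink section is named in the Flume 1.x user guide. Note that the channel binding lives directly on the sink (`sinks.HDFS.channel`, not `sinks.HDFS.hdfs.channel`), and the documented file-type value is `DataStream` (capital D, capital S). The hostname placeholder is kept from the original; whether this resolves the thread's issue is not confirmed here.

```properties
agent1.sinks.HDFS.type = hdfs
agent1.sinks.HDFS.channel = MemoryChannel-2
agent1.sinks.HDFS.hdfs.path = hdfs://<hostname>:54310/user/
agent1.sinks.HDFS.hdfs.fileType = DataStream
agent1.sinks.HDFS.hdfs.writeFormat = Text
```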

On Tue, Sep 18, 2012 at 4:40 PM, Brock Noland <br...@cloudera.com> wrote:

> Yes they should work together. Please send the updated conf and log file.
>
> --
> Brock Noland
> Sent with Sparrow <http://www.sparrowmailapp.com/?sig>
>
>  On Tuesday, September 18, 2012 at 5:49 AM, prabhu k wrote:
>
>  I have tried both ways, but still not working....
> can you please confirm flume 1.2.0 support hadoop 1.0.3 version?
>
> Thanks,
> Prabhu.
> On Tue, Sep 18, 2012 at 3:32 PM, Nitin Pawar <ni...@gmail.com>wrote:
>
> can you write something in file continuously after you start flume-ng
>
> if you do tail -f it will start getting only new entries
> or you can just change the command  in the config file from tail -f to
> tail so each time it bring default last 10 lines from the the file
>
> ~nitin
>
> On Tue, Sep 18, 2012 at 2:51 PM, prabhu k <pr...@gmail.com> wrote:
> > Hi Nitin,
> >
> > While executing flume-ng, i have updated the flume_test.txt file,still
> > unable to do HDFS sink.
> >
> > Thanks,
> > Prabhu.
> >
> > On Tue, Sep 18, 2012 at 2:35 PM, Nitin Pawar <ni...@gmail.com>
> > wrote:
> >>
> >> Hi Prabhu,
> >>
> >> are you sure there is continuous text being written to your file
> >> flume_test.txt.
> >>
> >> if nothing is written to that file, flume will not write anything into
> >> hdfs.
> >>
> >> On Tue, Sep 18, 2012 at 2:31 PM, prabhu k <pr...@gmail.com>
> wrote:
> >> > Hi Brock,
> >> >
> >> > Thanks for the reply.
> >> >
> >> > As per your suggestion, i have modified,but still same issue.
> >> >
> >> > My hadoop version is : 1.0.3 & Flume version is 1.2.0. Please let us
> >> > know is
> >> > there any incompatible version?
> >> >
> >> > On Mon, Sep 17, 2012 at 8:01 PM, Brock Noland <br...@cloudera.com>
> >> > wrote:
> >> >>
> >> >> Hi,
> >> >>
> >> >> I believe, this line:
> >> >> agent1.sinks.HDFS.hdfs.type = hdfs
> >> >>
> >> >> should be:
> >> >> agent1.sinks.HDFS.type = hdfs
> >> >>
> >> >> Brock
> >> >>
> >> >> On Mon, Sep 17, 2012 at 5:17 AM, prabhu k <pr...@gmail.com>
> >> >> wrote:
> >> >> > Hi Users,
> >> >> >
> >> >> > I have followed the below link for sample text file to HDFS sink
> >> >> > using
> >> >> > tail
> >> >> > source.
> >> >> >
> >> >> >
> >> >> >
> >> >> >
> http://cloudfront.blogspot.in/2012/06/how-to-use-host-escape-sequence-in.html#more
> >> >> >
> >> >> > I have executed flume-ng like as below command. it seems got stuck.
> >> >> > and
> >> >> > attached flume.conf file.
> >> >> >
> >> >> > #bin/flume-ng agent -n agent1 -c /conf -f conf/flume.conf
> >> >> >
> >> >> >
> >> >> > flume.conf
> >> >> > ==========
> >> >> > agent1.sources = tail
> >> >> > agent1.channels = MemoryChannel-2
> >> >> > agent1.sinks = HDFS
> >> >> >
> >> >> > agent1.sources.tail.type = exec
> >> >> > agent1.sources.tail.command = tail -F
> >> >> >
> >> >> >
> >> >> >
> /usr/local/flume_dir/flume/flume-1.2.0-incubating-SNAPSHOT/flume_test.txt
> >> >> > agent1.sources.tail.channels = MemoryChannel-2
> >> >> >
> >> >> > agent1.sources.tail.interceptors = hostint
> >> >> > agent1.sources.tail.interceptors.hostint.type =
> >> >> > org.apache.flume.interceptor.HostInterceptor$Builder
> >> >> > agent1.sources.tail.interceptors.hostint.preserveExisting = true
> >> >> > agent1.sources.tail.interceptors.hostint.useIP = false
> >> >> >
> >> >> > agent1.sinks.HDFS.hdfs.channel = MemoryChannel-2
> >> >> > agent1.sinks.HDFS.hdfs.type = hdfs
> >> >> > agent1.sinks.HDFS.hdfs.path = hdfs://<hostname>:54310/user
> >> >> >
> >> >> > agent1.sinks.HDFS.hdfs.fileType = dataStream
> >> >> > agent1.sinks.HDFS.hdfs.writeFormat = text
> >> >> > agent1.channels.MemoryChannel-2.type = memory
> >> >> >
> >> >> >
> >> >> >
> >> >> > flume.log
> >> >> > ==========
> >> >> > 12/09/17 15:40:05 INFO lifecycle.LifecycleSupervisor: Starting
> >> >> > lifecycle
> >> >> > supervisor 1
> >> >> > 12/09/17 15:40:05 INFO node.FlumeNode: Flume node starting - agent1
> >> >> > 12/09/17 15:40:05 INFO nodemanager.DefaultLogicalNodeManager: Node
> >> >> > manager
> >> >> > starting
> >> >> > 12/09/17 15:40:05 INFO
> >> >> > properties.PropertiesFileConfigurationProvider:
> >> >> > Configuration provider starting
> >> >> > 12/09/17 15:40:05 INFO lifecycle.LifecycleSupervisor: Starting
> >> >> > lifecycle
> >> >> > supervisor 10
> >> >> > 12/09/17 15:40:05 INFO
> >> >> > properties.PropertiesFileConfigurationProvider:
> >> >> > Reloading configuration file:conf/flume.conf
> >> >> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
> >> >> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
> >> >> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
> >> >> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
> >> >> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
> >> >> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Added sinks: HDFS
> >> >> > Agent:
> >> >> > agent1
> >> >> > 12/09/17 15:40:05 WARN conf.FlumeConfiguration: Configuration empty
> >> >> > for:
> >> >> > HDFS.Removed.
> >> >> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Post-validation
> flume
> >> >> > configuration contains configuration  for agents: [agent1]
> >> >> > 12/09/17 15:40:05 INFO
> >> >> > properties.PropertiesFileConfigurationProvider:
> >> >> > Creating channels
> >> >> > 12/09/17 15:40:05 INFO
> >> >> > properties.PropertiesFileConfigurationProvider:
> >> >> > created channel MemoryChannel-2
> >> >> > 12/09/17 15:40:05 INFO nodemanager.DefaultLogicalNodeManager:
> >> >> > Starting
> >> >> > new
> >> >> > configuration:{ sourceRunners:{tail=EventDrivenSourceRunner: {
> >> >> > source:org.apache.flume.source.ExecSource@c24c0 }} sinkRunners:{}
> >> >> >
> >> >> >
> >> >> >
> channels:{MemoryChannel-2=org.apache.flume.channel.MemoryChannel@140c281}
> }
> >> >> > 12/09/17 15:40:05 INFO nodemanager.DefaultLogicalNodeManager:
> >> >> > Starting
> >> >> > Channel MemoryChannel-2
> >> >> > 12/09/17 15:40:05 INFO nodemanager.DefaultLogicalNodeManager:
> >> >> > Starting
> >> >> > Source tail
> >> >> > 12/09/17 15:40:05 INFO source.ExecSource: Exec source starting with
> >> >> > command:tail -F
> >> >> >
> >> >> >
> >> >> >
> /usr/local/flume_dir/flume/flume-1.2.0-incubating-SNAPSHOT/flume_test.txt
> >> >> >
> >> >> > Please suggest and help me on this issue.
> >> >>
> >> >>
> >> >>
> >> >> --
> >> >> Apache MRUnit - Unit testing MapReduce -
> >> >> http://incubator.apache.org/mrunit/
> >> >
> >> >
> >>
> >>
> >>
> >> --
> >> Nitin Pawar
> >
> >
>
>
>
> --
> Nitin Pawar
>
>
>
>

Re: tail source exec unable to HDFS sink.

Posted by Brock Noland <br...@cloudera.com>.
Yes they should work together. Please send the updated conf and log file.  

-- 
Brock Noland
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)




Re: tail source exec unable to HDFS sink.

Posted by prabhu k <pr...@gmail.com>.
I have tried both ways, but it is still not working.
Can you please confirm that Flume 1.2.0 supports Hadoop 1.0.3?

Thanks,
Prabhu.

Re: tail source exec unable to HDFS sink.

Posted by Nitin Pawar <ni...@gmail.com>.
Can you write something to the file continuously after you start flume-ng?

If you use tail -f, it will pick up only new entries. Alternatively, you can
change the command in the config file from tail -f to plain tail, so that each
run brings in the last 10 lines of the file by default.

~nitin

On Tue, Sep 18, 2012 at 2:51 PM, prabhu k <pr...@gmail.com> wrote:
> Hi Nitin,
>
> While executing flume-ng, i have updated the flume_test.txt file,still
> unable to do HDFS sink.
>
> Thanks,
> Prabhu.
>
> On Tue, Sep 18, 2012 at 2:35 PM, Nitin Pawar <ni...@gmail.com>
> wrote:
>>
>> Hi Prabhu,
>>
>> are you sure there is continuous text being written to your file
>> flume_test.txt.
>>
>> if nothing is written to that file, flume will not write anything into
>> hdfs.
>>
>> On Tue, Sep 18, 2012 at 2:31 PM, prabhu k <pr...@gmail.com> wrote:
>> > Hi Brock,
>> >
>> > Thanks for the reply.
>> >
>> > As per your suggestion, i have modified,but still same issue.
>> >
>> > My hadoop version is : 1.0.3 & Flume version is 1.2.0. Please let us
>> > know is
>> > there any incompatible version?
>> >
>> > On Mon, Sep 17, 2012 at 8:01 PM, Brock Noland <br...@cloudera.com>
>> > wrote:
>> >>
>> >> Hi,
>> >>
>> >> I believe, this line:
>> >> agent1.sinks.HDFS.hdfs.type = hdfs
>> >>
>> >> should be:
>> >> agent1.sinks.HDFS.type = hdfs
>> >>
>> >> Brock
>> >>
>> >> On Mon, Sep 17, 2012 at 5:17 AM, prabhu k <pr...@gmail.com>
>> >> wrote:
>> >> > Hi Users,
>> >> >
>> >> > I have followed the below link for sample text file to HDFS sink
>> >> > using
>> >> > tail
>> >> > source.
>> >> >
>> >> >
>> >> >
>> >> > http://cloudfront.blogspot.in/2012/06/how-to-use-host-escape-sequence-in.html#more
>> >> >
>> >> > I have executed flume-ng like as below command. it seems got stuck.
>> >> > and
>> >> > attached flume.conf file.
>> >> >
>> >> > #bin/flume-ng agent -n agent1 -c /conf -f conf/flume.conf
>> >> >
>> >> >
>> >> > flume.conf
>> >> > ==========
>> >> > agent1.sources = tail
>> >> > agent1.channels = MemoryChannel-2
>> >> > agent1.sinks = HDFS
>> >> >
>> >> > agent1.sources.tail.type = exec
>> >> > agent1.sources.tail.command = tail -F
>> >> >
>> >> >
>> >> > /usr/local/flume_dir/flume/flume-1.2.0-incubating-SNAPSHOT/flume_test.txt
>> >> > agent1.sources.tail.channels = MemoryChannel-2
>> >> >
>> >> > agent1.sources.tail.interceptors = hostint
>> >> > agent1.sources.tail.interceptors.hostint.type =
>> >> > org.apache.flume.interceptor.HostInterceptor$Builder
>> >> > agent1.sources.tail.interceptors.hostint.preserveExisting = true
>> >> > agent1.sources.tail.interceptors.hostint.useIP = false
>> >> >
>> >> > agent1.sinks.HDFS.hdfs.channel = MemoryChannel-2
>> >> > agent1.sinks.HDFS.hdfs.type = hdfs
>> >> > agent1.sinks.HDFS.hdfs.path = hdfs://<hostname>:54310/user
>> >> >
>> >> > agent1.sinks.HDFS.hdfs.fileType = dataStream
>> >> > agent1.sinks.HDFS.hdfs.writeFormat = text
>> >> > agent1.channels.MemoryChannel-2.type = memory
>> >> >
>> >> >
>> >> >
>> >> > flume.log
>> >> > ==========
>> >> > 12/09/17 15:40:05 INFO lifecycle.LifecycleSupervisor: Starting
>> >> > lifecycle
>> >> > supervisor 1
>> >> > 12/09/17 15:40:05 INFO node.FlumeNode: Flume node starting - agent1
>> >> > 12/09/17 15:40:05 INFO nodemanager.DefaultLogicalNodeManager: Node
>> >> > manager
>> >> > starting
>> >> > 12/09/17 15:40:05 INFO
>> >> > properties.PropertiesFileConfigurationProvider:
>> >> > Configuration provider starting
>> >> > 12/09/17 15:40:05 INFO lifecycle.LifecycleSupervisor: Starting
>> >> > lifecycle
>> >> > supervisor 10
>> >> > 12/09/17 15:40:05 INFO
>> >> > properties.PropertiesFileConfigurationProvider:
>> >> > Reloading configuration file:conf/flume.conf
>> >> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
>> >> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
>> >> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
>> >> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
>> >> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
>> >> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Added sinks: HDFS
>> >> > Agent:
>> >> > agent1
>> >> > 12/09/17 15:40:05 WARN conf.FlumeConfiguration: Configuration empty
>> >> > for:
>> >> > HDFS.Removed.
>> >> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Post-validation flume
>> >> > configuration contains configuration  for agents: [agent1]
>> >> > 12/09/17 15:40:05 INFO
>> >> > properties.PropertiesFileConfigurationProvider:
>> >> > Creating channels
>> >> > 12/09/17 15:40:05 INFO
>> >> > properties.PropertiesFileConfigurationProvider:
>> >> > created channel MemoryChannel-2
>> >> > 12/09/17 15:40:05 INFO nodemanager.DefaultLogicalNodeManager:
>> >> > Starting
>> >> > new
>> >> > configuration:{ sourceRunners:{tail=EventDrivenSourceRunner: {
>> >> > source:org.apache.flume.source.ExecSource@c24c0 }} sinkRunners:{}
>> >> >
>> >> >
>> >> > channels:{MemoryChannel-2=org.apache.flume.channel.MemoryChannel@140c281} }
>> >> > 12/09/17 15:40:05 INFO nodemanager.DefaultLogicalNodeManager:
>> >> > Starting
>> >> > Channel MemoryChannel-2
>> >> > 12/09/17 15:40:05 INFO nodemanager.DefaultLogicalNodeManager:
>> >> > Starting
>> >> > Source tail
>> >> > 12/09/17 15:40:05 INFO source.ExecSource: Exec source starting with
>> >> > command:tail -F
>> >> >
>> >> >
>> >> > /usr/local/flume_dir/flume/flume-1.2.0-incubating-SNAPSHOT/flume_test.txt
>> >> >
>> >> > Please suggest and help me on this issue.
>> >>
>> >>
>> >>
>> >> --
>> >> Apache MRUnit - Unit testing MapReduce -
>> >> http://incubator.apache.org/mrunit/
>> >
>> >
>>
>>
>>
>> --
>> Nitin Pawar
>
>



-- 
Nitin Pawar

Re: tail source exec unable to HDFS sink.

Posted by prabhu k <pr...@gmail.com>.
Hi Nitin,

While flume-ng was running, I updated the flume_test.txt file, but it is still
not writing to the HDFS sink.

Thanks,
Prabhu.


Re: tail source exec unable to HDFS sink.

Posted by Nitin Pawar <ni...@gmail.com>.
Hi Prabhu,

Are you sure text is continuously being written to your file
flume_test.txt?

If nothing is written to that file, Flume will not write anything into HDFS.
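
A quick way to test this point is to generate some input yourself. The
sketch below appends a few lines to a stand-in file (substitute the path
from flume.conf; in practice you would loop with a sleep so writes keep
flowing while the agent runs):

```shell
# Write a handful of timestamped lines for tail -F to pick up, so the
# exec source has events to forward into the channel.
TARGET=/tmp/flume_test.txt    # stand-in for the path in flume.conf
rm -f "$TARGET"               # start from an empty file
for i in 1 2 3 4 5; do
  echo "test event $i $(date)" >> "$TARGET"
done
wc -l < "$TARGET"             # line count should be 5
```

If events still never reach HDFS while this file is growing, the problem
is on the sink side rather than the source.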

On Tue, Sep 18, 2012 at 2:31 PM, prabhu k <pr...@gmail.com> wrote:
> Hi Brock,
>
> Thanks for the reply.
>
> > As per your suggestion, I modified it, but I still see the same issue.
> >
> > My Hadoop version is 1.0.3 and my Flume version is 1.2.0. Please let us
> > know if there is a version incompatibility.
>
> On Mon, Sep 17, 2012 at 8:01 PM, Brock Noland <br...@cloudera.com> wrote:
>>
>> Hi,
>>
>> I believe, this line:
>> agent1.sinks.HDFS.hdfs.type = hdfs
>>
>> should be:
>> agent1.sinks.HDFS.type = hdfs
>>
>> Brock
>>
>> On Mon, Sep 17, 2012 at 5:17 AM, prabhu k <pr...@gmail.com> wrote:
>> > Hi Users,
>> >
>> > I have followed the below link for sample text file to HDFS sink using
>> > tail
>> > source.
>> >
>> >
>> > http://cloudfront.blogspot.in/2012/06/how-to-use-host-escape-sequence-in.html#more
>> >
>> > I have executed flume-ng like as below command. it seems got stuck. and
>> > attached flume.conf file.
>> >
>> > #bin/flume-ng agent -n agent1 -c /conf -f conf/flume.conf
>> >
>> >
>> > flume.conf
>> > ==========
>> > agent1.sources = tail
>> > agent1.channels = MemoryChannel-2
>> > agent1.sinks = HDFS
>> >
>> > agent1.sources.tail.type = exec
>> > agent1.sources.tail.command = tail -F
>> >
>> > /usr/local/flume_dir/flume/flume-1.2.0-incubating-SNAPSHOT/flume_test.txt
>> > agent1.sources.tail.channels = MemoryChannel-2
>> >
>> > agent1.sources.tail.interceptors = hostint
>> > agent1.sources.tail.interceptors.hostint.type =
>> > org.apache.flume.interceptor.HostInterceptor$Builder
>> > agent1.sources.tail.interceptors.hostint.preserveExisting = true
>> > agent1.sources.tail.interceptors.hostint.useIP = false
>> >
>> > agent1.sinks.HDFS.hdfs.channel = MemoryChannel-2
>> > agent1.sinks.HDFS.hdfs.type = hdfs
>> > agent1.sinks.HDFS.hdfs.path = hdfs://<hostname>:54310/user
>> >
>> > agent1.sinks.HDFS.hdfs.fileType = dataStream
>> > agent1.sinks.HDFS.hdfs.writeFormat = text
>> > agent1.channels.MemoryChannel-2.type = memory
>> >
>> >
>> >
>> > flume.log
>> > ==========
>> > 12/09/17 15:40:05 INFO lifecycle.LifecycleSupervisor: Starting lifecycle
>> > supervisor 1
>> > 12/09/17 15:40:05 INFO node.FlumeNode: Flume node starting - agent1
>> > 12/09/17 15:40:05 INFO nodemanager.DefaultLogicalNodeManager: Node
>> > manager
>> > starting
>> > 12/09/17 15:40:05 INFO properties.PropertiesFileConfigurationProvider:
>> > Configuration provider starting
>> > 12/09/17 15:40:05 INFO lifecycle.LifecycleSupervisor: Starting lifecycle
>> > supervisor 10
>> > 12/09/17 15:40:05 INFO properties.PropertiesFileConfigurationProvider:
>> > Reloading configuration file:conf/flume.conf
>> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
>> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
>> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
>> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
>> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Processing:HDFS
>> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Added sinks: HDFS Agent:
>> > agent1
>> > 12/09/17 15:40:05 WARN conf.FlumeConfiguration: Configuration empty for:
>> > HDFS.Removed.
>> > 12/09/17 15:40:05 INFO conf.FlumeConfiguration: Post-validation flume
>> > configuration contains configuration  for agents: [agent1]
>> > 12/09/17 15:40:05 INFO properties.PropertiesFileConfigurationProvider:
>> > Creating channels
>> > 12/09/17 15:40:05 INFO properties.PropertiesFileConfigurationProvider:
>> > created channel MemoryChannel-2
>> > 12/09/17 15:40:05 INFO nodemanager.DefaultLogicalNodeManager: Starting
>> > new
>> > configuration:{ sourceRunners:{tail=EventDrivenSourceRunner: {
>> > source:org.apache.flume.source.ExecSource@c24c0 }} sinkRunners:{}
>> >
>> > channels:{MemoryChannel-2=org.apache.flume.channel.MemoryChannel@140c281} }
>> > 12/09/17 15:40:05 INFO nodemanager.DefaultLogicalNodeManager: Starting
>> > Channel MemoryChannel-2
>> > 12/09/17 15:40:05 INFO nodemanager.DefaultLogicalNodeManager: Starting
>> > Source tail
>> > 12/09/17 15:40:05 INFO source.ExecSource: Exec source starting with
>> > command:tail -F
>> >
>> > /usr/local/flume_dir/flume/flume-1.2.0-incubating-SNAPSHOT/flume_test.txt
>> >
>> > Please suggest and help me on this issue.
>>
>>
>>
>> --
>> Apache MRUnit - Unit testing MapReduce -
>> http://incubator.apache.org/mrunit/
>
>



-- 
Nitin Pawar

Re: tail source exec unable to HDFS sink.

Posted by prabhu k <pr...@gmail.com>.
Hi Brock,

Thanks for the reply.

As per your suggestion, I modified it, but I still see the same issue.

My Hadoop version is 1.0.3 and my Flume version is 1.2.0. Please let us
know if there is a version incompatibility.
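
One more thing that may be worth re-checking alongside the version
question (an observation about the launch command, not something Brock
suggested): the posted invocation used `-c /conf`, an absolute path at
the filesystem root. If the conf directory actually lives under the
Flume home, the relative form is the usual one, and adding the console
logger makes any sink-side exception visible immediately instead of
only in flume.log:

bin/flume-ng agent -n agent1 -c conf -f conf/flume.conf -Dflume.root.logger=INFO,console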

On Mon, Sep 17, 2012 at 8:01 PM, Brock Noland <br...@cloudera.com> wrote:

> Hi,
>
> I believe this line:
> agent1.sinks.HDFS.hdfs.type = hdfs
>
> should be:
> agent1.sinks.HDFS.type = hdfs
>
> Brock
>
> On Mon, Sep 17, 2012 at 5:17 AM, prabhu k <pr...@gmail.com> wrote:
> > [quoted original post trimmed]
>
>
>
> --
> Apache MRUnit - Unit testing MapReduce -
> http://incubator.apache.org/mrunit/
>

Re: tail source exec unable to HDFS sink.

Posted by Brock Noland <br...@cloudera.com>.
Hi,

I believe this line:
agent1.sinks.HDFS.hdfs.type = hdfs

should be:
agent1.sinks.HDFS.type = hdfs

Brock
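
[A note that generalizes this point: in Flume NG, the generic sink
properties `type` and `channel` bind directly to the sink name, and only
HDFS-specific settings take the `hdfs.` prefix. That is consistent with
the posted log, which shows "Configuration empty for: HDFS. Removed."
and an empty sinkRunners:{}. A corrected sink section might look roughly
like the sketch below, with hostname and path carried over from the
original config; the capitalized DataStream/Text values follow the Flume
documentation, though their casing may not be related to the hang.]

agent1.sinks.HDFS.type = hdfs
agent1.sinks.HDFS.channel = MemoryChannel-2
agent1.sinks.HDFS.hdfs.path = hdfs://<hostname>:54310/user
agent1.sinks.HDFS.hdfs.fileType = DataStream
agent1.sinks.HDFS.hdfs.writeFormat = Text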

On Mon, Sep 17, 2012 at 5:17 AM, prabhu k <pr...@gmail.com> wrote:
> [quoted original post trimmed]



-- 
Apache MRUnit - Unit testing MapReduce - http://incubator.apache.org/mrunit/