Posted to user@flume.apache.org by Sarath P R <sa...@gmail.com> on 2013/01/31 08:21:16 UTC
I am sorry, I got it. I posted the question before thinking it through in the
right way. It was just a mistake in flume.conf. Sorry and thanks :)
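For anyone who hits the same symptoms: the repeated "Retrying connect to server: datanode3/10.31.0.30:5430" lines followed by "Callable timed out after 10000 ms" typically mean the hdfs.path in flume.conf points at the wrong host or port; it should be the NameNode RPC address (the cluster's fs.default.name), not a DataNode. A minimal sketch of an agent wired as Twitter source -> memory channel -> HDFS sink, matching the component names in the log below. The agent name "TwitterAgent", the source class, and the host/port are placeholders, not taken from the original thread, and must be replaced with your own values:

```properties
# Hypothetical agent name; source/channel/sink names match the log below.
TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS

# Custom Twitter source -- the class name here is a placeholder.
TwitterAgent.sources.Twitter.type = com.example.flume.TwitterSource
TwitterAgent.sources.Twitter.channels = MemChannel

TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 10000
TwitterAgent.channels.MemChannel.transactionCapacity = 100

TwitterAgent.sinks.HDFS.channel = MemChannel
TwitterAgent.sinks.HDFS.type = hdfs
# Must point at the NameNode (fs.default.name), not a DataNode address.
TwitterAgent.sinks.HDFS.hdfs.path = hdfs://namenode-host:8020/user/flume/tweets/
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
# Default is 10000 ms -- the timeout seen in the stack trace below;
# raise it only if HDFS calls legitimately take longer.
TwitterAgent.sinks.HDFS.hdfs.callTimeout = 10000
```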
On Thu, Jan 31, 2013 at 12:41 PM, Sarath P R <sa...@gmail.com> wrote:
>
> Hi All,
>
> I am able to tail a file and sink the events to HDFS.
>
> But when I try the Twitter custom source, I get the following errors in
> flume.log:
>
> I am working with Hadoop 1.0.4 and Flume NG 1.3.1.
>
> 31 Jan 2013 11:13:48,925 INFO [lifecycleSupervisor-1-0]
> (org.apache.flume.instrumentation.MonitoredCounterGroup.register:89) -
> Monitoried counter group for type: CHANNEL, name: MemChannel, registered
> successfully.
> 31 Jan 2013 11:13:48,926 INFO [lifecycleSupervisor-1-0]
> (org.apache.flume.instrumentation.MonitoredCounterGroup.start:73) -
> Component type: CHANNEL, name: MemChannel started
> 31 Jan 2013 11:13:48,926 INFO [conf-file-poller-0]
> (org.apache.flume.node.nodemanager.DefaultLogicalNodeManager.startAllComponents:141)
> - Starting Sink HDFS
> 31 Jan 2013 11:13:48,932 INFO [conf-file-poller-0]
> (org.apache.flume.node.nodemanager.DefaultLogicalNodeManager.startAllComponents:152)
> - Starting Source Twitter
> 31 Jan 2013 11:13:48,934 INFO [lifecycleSupervisor-1-1]
> (org.apache.flume.instrumentation.MonitoredCounterGroup.register:89) -
> Monitoried counter group for type: SINK, name: HDFS, registered
> successfully.
> 31 Jan 2013 11:13:48,934 INFO [lifecycleSupervisor-1-1]
> (org.apache.flume.instrumentation.MonitoredCounterGroup.start:73) -
> Component type: SINK, name: HDFS started
> 31 Jan 2013 11:13:48,938 INFO [Twitter Stream consumer-1[initializing]] (
> twitter4j.internal.logging.SLF4JLogger.info:83) - Establishing
> connection.
> 31 Jan 2013 11:13:51,304 INFO [Twitter Stream consumer-1[Establishing
> connection]] (twitter4j.internal.logging.SLF4JLogger.info:83) - Connection
> established.
> 31 Jan 2013 11:13:51,305 INFO [Twitter Stream consumer-1[Establishing
> connection]] (twitter4j.internal.logging.SLF4JLogger.info:83) - Receiving
> status stream.
> 31 Jan 2013 11:13:52,884 INFO [hdfs-HDFS-call-runner-0]
> (org.apache.hadoop.ipc.Client$Connection.handleConnectionFailure:666) -
> Retrying connect to server: datanode3/10.31.0.30:5430. Already tried 0
> time(s).
> 31 Jan 2013 11:13:53,885 INFO [hdfs-HDFS-call-runner-0]
> (org.apache.hadoop.ipc.Client$Connection.handleConnectionFailure:666) -
> Retrying connect to server: datanode3/10.31.0.30:5430. Already tried 1
> time(s).
> 31 Jan 2013 11:13:54,886 INFO [hdfs-HDFS-call-runner-0]
> (org.apache.hadoop.ipc.Client$Connection.handleConnectionFailure:666) -
> Retrying connect to server: datanode3/10.31.0.30:5430. Already tried 2
> time(s).
> 31 Jan 2013 11:13:55,887 INFO [hdfs-HDFS-call-runner-0]
> (org.apache.hadoop.ipc.Client$Connection.handleConnectionFailure:666) -
> Retrying connect to server: datanode3/10.31.0.30:5430. Already tried 3
> time(s).
> 31 Jan 2013 11:13:56,888 INFO [hdfs-HDFS-call-runner-0]
> (org.apache.hadoop.ipc.Client$Connection.handleConnectionFailure:666) -
> Retrying connect to server: datanode3/10.31.0.30:5430. Already tried 4
> time(s).
> 31 Jan 2013 11:13:57,888 INFO [hdfs-HDFS-call-runner-0]
> (org.apache.hadoop.ipc.Client$Connection.handleConnectionFailure:666) -
> Retrying connect to server: datanode3/10.31.0.30:5430. Already tried 5
> time(s).
> 31 Jan 2013 11:13:58,889 INFO [hdfs-HDFS-call-runner-0]
> (org.apache.hadoop.ipc.Client$Connection.handleConnectionFailure:666) -
> Retrying connect to server: datanode3/10.31.0.30:5430. Already tried 6
> time(s).
> 31 Jan 2013 11:13:59,890 INFO [hdfs-HDFS-call-runner-0]
> (org.apache.hadoop.ipc.Client$Connection.handleConnectionFailure:666) -
> Retrying connect to server: datanode3/10.31.0.30:5430. Already tried 7
> time(s).
> 31 Jan 2013 11:14:00,890 INFO [hdfs-HDFS-call-runner-0]
> (org.apache.hadoop.ipc.Client$Connection.handleConnectionFailure:666) -
> Retrying connect to server: datanode3/10.31.0.30:5430. Already tried 8
> time(s).
> 31 Jan 2013 11:14:01,613 INFO [hdfs-HDFS-call-runner-0]
> (org.apache.hadoop.ipc.Client$Connection.handleConnectionFailure:666) -
> Retrying connect to server: datanode3/10.31.0.30:5430. Already tried 9
> time(s).
> 31 Jan 2013 11:14:01,614 WARN
> [SinkRunner-PollingRunner-DefaultSinkProcessor]
> (org.apache.flume.sink.hdfs.HDFSEventSink.process:456) - HDFS IO error
> java.io.IOException: Callable timed out after 10000 ms
> at
> org.apache.flume.sink.hdfs.HDFSEventSink.callWithTimeout(HDFSEventSink.java:352)
> at
> org.apache.flume.sink.hdfs.HDFSEventSink.append(HDFSEventSink.java:727)
> at
> org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:430)
> at
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
> at
> org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
> at java.lang.Thread.run(Thread.java:722)
> Caused by: java.util.concurrent.TimeoutException
> at
> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:258)
> at java.util.concurrent.FutureTask.get(FutureTask.java:119)
> at
> org.apache.flume.sink.hdfs.HDFSEventSink.callWithTimeout(HDFSEventSink.java:345)
> ... 5 more
>
>
> Any thoughts? Thanks in advance.
>
> --
> Thank You
> Sarath P R | cell +91 99 95 02 4287 | http://sprism.blogspot.com
>
>