Posted to user@flume.apache.org by Jonathan <jo...@gmail.com> on 2011/09/28 16:30:10 UTC

Having trouble writing to HDFS

Hi,

I am having trouble getting Flume to output to HDFS. My configurations are
loading and I can watch events being transferred across the Flume nodes, but
nothing is showing up in HDFS. Here is the configuration I am using:
config [collector1, collectorSource(35853),
collectorSink("hdfs://hidden_hdfs_location/flumeTemp/", "test")]

Thanks for your help
Jonathan

Re: Having trouble writing to HDFS

Posted by Jonathan <jo...@gmail.com>.
If it helps, here is my log file:
2011-09-28 11:31:38,923 INFO com.cloudera.flume.handlers.rolling.RollSink:
Created RollSink: trigger=[TimeTrigger: maxAge=30000
tagger=com.cloudera.flume.handlers.rolling.ProcessTagger@162e07d]
checkPeriodM$
2011-09-28 11:31:43,933 WARN com.cloudera.flume.agent.LivenessManager:
Heartbeats are backing up, currently behind by 1 heartbeats
2011-09-28 11:31:48,943 WARN com.cloudera.flume.agent.LivenessManager:
Heartbeats are backing up, currently behind by 2 heartbeats
2011-09-28 11:31:48,943 WARN com.cloudera.flume.agent.LivenessManager:
Heartbeats are backing up, currently behind by 3 heartbeats
2011-09-28 11:31:53,953 WARN com.cloudera.flume.agent.LivenessManager:
Heartbeats are backing up, currently behind by 4 heartbeats
2011-09-28 11:31:53,953 WARN com.cloudera.flume.agent.LivenessManager:
Heartbeats are backing up, currently behind by 5 heartbeats
2011-09-28 11:31:58,963 WARN com.cloudera.flume.agent.LivenessManager:
Heartbeats are backing up, currently behind by 6 heartbeats
2011-09-28 11:31:58,963 WARN com.cloudera.flume.agent.LivenessManager:
Heartbeats are backing up, currently behind by 7 heartbeats
2011-09-28 11:31:59,473 WARN com.cloudera.flume.conf.FlumeBuilder:
Deprecated syntax: Expected a format spec but instead had a (String)
avrojson
2011-09-28 11:32:03,973 WARN com.cloudera.flume.agent.LivenessManager:
Heartbeats are backing up, currently behind by 8 heartbeats
2011-09-28 11:32:03,973 WARN com.cloudera.flume.agent.LivenessManager:
Heartbeats are backing up, currently behind by 9 heartbeats
2011-09-28 11:32:08,932 ERROR com.cloudera.flume.agent.LogicalNode: Forcing
driver to exit uncleanly
2011-09-28 11:32:08,932 ERROR
com.cloudera.flume.core.connector.DirectDriver: Closing down due to
exception during append calls
java.io.IOException: Waiting for queue element was interrupted! null
        at
com.cloudera.flume.handlers.thrift.ThriftEventSource.next(ThriftEventSource.java:222)
        at
com.cloudera.flume.collector.CollectorSource.next(CollectorSource.java:72)
        at
com.cloudera.flume.core.connector.DirectDriver$PumperThread.run(DirectDriver.java:105)
Caused by: java.lang.InterruptedException
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:1961)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2038)
        at
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at
com.cloudera.flume.handlers.thrift.ThriftEventSource.next(ThriftEventSource.java:209)
        ... 2 more
2011-09-28 11:32:08,933 INFO com.cloudera.flume.core.connector.DirectDriver:
Connector logicalNode collector1-2137 exited with error: Waiting for queue
element was interrupted! null
java.io.IOException: Waiting for queue element was interrupted! null
        at
com.cloudera.flume.handlers.thrift.ThriftEventSource.next(ThriftEventSource.java:222)
        at
com.cloudera.flume.collector.CollectorSource.next(CollectorSource.java:72)
        at
com.cloudera.flume.core.connector.DirectDriver$PumperThread.run(DirectDriver.java:105)
Caused by: java.lang.InterruptedException
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:1961)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2038)
        at
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:424)
        at
com.cloudera.flume.handlers.thrift.ThriftEventSource.next(ThriftEventSource.java:209)
        ... 2 more
2011-09-28 11:32:08,933 INFO com.cloudera.flume.collector.CollectorSource:
closed
2011-09-28 11:32:08,933 INFO com.cloudera.flume.agent.LogicalNode: Node
config successfully set to com.cloudera.flume.conf.FlumeConfigData@1348e2f
2011-09-28 11:32:08,936 INFO
com.cloudera.flume.handlers.thrift.ThriftEventSource: Closed server on port
35853...
2011-09-28 11:32:08,936 INFO
com.cloudera.flume.handlers.thrift.ThriftEventSource: Queue still has 0
elements ...
2011-09-28 11:32:08,936 INFO com.cloudera.flume.handlers.rolling.RollSink:
closing RollSink 'escapedCustomDfs("hdfs://ec2-174-129-89-0.compute-1.amazonaws.com:54310/user/flume/","syslog%{rolltag}")'
2011-09-28 11:32:08,937 ERROR
com.cloudera.flume.core.connector.DirectDriver: Exiting driver logicalNode
collector1-2137 in error state CollectorSource | Collector because Waiting
for queue element was inte$
2011-09-28 11:32:08,937 INFO com.cloudera.flume.collector.CollectorSource:
opened
2011-09-28 11:32:08,937 INFO
com.cloudera.flume.handlers.thrift.ThriftEventSource: Starting blocking
thread pool server on port 35853...
2011-09-28 11:32:08,937 INFO com.cloudera.flume.handlers.rolling.RollSink:
opening RollSink 'escapedCustomDfs("hdfs://10.116.98.79:54310/user/flume/","syslog%{rolltag}")'
2011-09-28 11:32:08,940 WARN com.cloudera.flume.conf.FlumeBuilder:
Deprecated syntax: Expected a format spec but instead had a (String)
avrojson
2011-09-28 11:32:08,940 INFO
com.cloudera.flume.handlers.debug.InsistentOpenDecorator: Opened
MaskDecorator on try 0

Jonathan


On Wed, Sep 28, 2011 at 11:16 AM, Jonathan <jo...@gmail.com> wrote:

> To the best of my knowledge they are both running the same HDFS version; I
> installed both from Cloudera's CDH.
>
> Jonathan
>
>
>
> On Wed, Sep 28, 2011 at 11:12 AM, Justin Workman <justinjworkman@gmail.com> wrote:
>
>> I have also seen this happen when the Hadoop version on the collector nodes
>> is different from the HDFS version you are writing to.
>>
>> Sent from my iPhone
>>
>> On Sep 28, 2011, at 8:40 AM, Jonathan <jo...@gmail.com> wrote:
>>
>> Hi,
>>
>> Yeah, that doesn't seem to help. Thanks for trying though.
>>
>> Jonathan
>>
>>
>> On Wed, Sep 28, 2011 at 10:35 AM, steve layland <stevieplayland@gmail.com> wrote:
>>
>>> 54310
>>
>>
>>
>

Re: Having trouble writing to HDFS

Posted by Jonathan <jo...@gmail.com>.
To the best of my knowledge they are both running the same HDFS version; I
installed both from Cloudera's CDH.

Jonathan


On Wed, Sep 28, 2011 at 11:12 AM, Justin Workman <ju...@gmail.com> wrote:

> I have also seen this happen when the Hadoop version on the collector nodes
> is different from the HDFS version you are writing to.
>
> Sent from my iPhone
>
> On Sep 28, 2011, at 8:40 AM, Jonathan <jo...@gmail.com> wrote:
>
> Hi,
>
> Yeah, that doesn't seem to help. Thanks for trying though.
>
> Jonathan
>
>
> On Wed, Sep 28, 2011 at 10:35 AM, steve layland <stevieplayland@gmail.com> wrote:
>
>> 54310
>
>
>

Re: Having trouble writing to HDFS

Posted by Justin Workman <ju...@gmail.com>.
I have also seen this happen when the Hadoop version on the collector nodes is different from the HDFS version you are writing to.
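
One quick way to check, assuming a standard CDH install, is to run the same
command on a collector box and on the HDFS namenode and compare what they
report:

  # run on a collector node and again on the namenode;
  # the reported Hadoop version and build should match
  $ hadoop version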

Sent from my iPhone

On Sep 28, 2011, at 8:40 AM, Jonathan <jo...@gmail.com> wrote:

> Hi,
> 
> Yeah, that doesn't seem to help. Thanks for trying though.
> 
> Jonathan
> 
> 
> On Wed, Sep 28, 2011 at 10:35 AM, steve layland <st...@gmail.com> wrote:
> 54310
> 

Re: Having trouble writing to HDFS

Posted by Jonathan <jo...@gmail.com>.
Hi,

Yeah, that doesn't seem to help. Thanks for trying though.

Jonathan


On Wed, Sep 28, 2011 at 10:35 AM, steve layland <st...@gmail.com> wrote:

> 54310

Re: Having trouble writing to HDFS

Posted by steve layland <st...@gmail.com>.
Hard to say if this is due to the same problem, but I solved it by using a
fully qualified namenode URI, e.g. hdfs://your.namenode:54310/path/to/sink.
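
In your case that would mean a sink spec along the lines of the following
(the hostname here is just a placeholder; the other arguments mirror the
config earlier in the thread):

  collectorSink("hdfs://namenode.example.com:54310/flumeTemp/", "test")

rather than

  collectorSink("hdfs://hidden_hdfs_location/flumeTemp/", "test")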

Hope that helps!
On Sep 28, 2011 7:31 AM, "Jonathan" <jo...@gmail.com> wrote:
> Hi,
>
> I am having trouble getting Flume to output to HDFS. My configurations are
> loading and I can watch events being transferred across the Flume nodes, but
> nothing is showing up in HDFS. Here is the configuration I am using:
> config [collector1, collectorSource(35853),
> collectorSink("hdfs://hidden_hdfs_location/flumeTemp/", "test")]
>
> Thanks for your help
> Jonathan