Posted to user@flume.apache.org by Abhi Dubey <ab...@gmail.com> on 2012/11/30 06:50:33 UTC

error writing to collectorSink on mac os x

> Hi,
> 
> I am running Hadoop in pseudo-distributed mode on Mac OS X 10.8.2. I installed Flume via Homebrew, and I can write to the console and to the local file system using the text sink. Hadoop was also installed via Homebrew.
> 
> But if I use collectorSink with either a file:/// or an hdfs:// target, I get an error. The error is the same for both types of targets.
> 
> 
> 2012-11-29 20:22:47,182 [logicalNode new-host-2.home-21] INFO debug.InsistentAppendDecorator: append attempt 6 failed, backoff (60000ms): failure to login
> 2012-11-29 20:23:47,181 [pool-8-thread-1] INFO hdfs.EscapedCustomDfsSink: Opening file:///Users/abhi/flume/testfile20121129-202317290-0800.1354249397290310000.00000037
> 2012-11-29 20:23:47,183 [logicalNode new-host-2.home-21] INFO debug.StubbornAppendSink: append failed on event 'new-host-2.home [INFO Thu Nov 29 20:21:44 PST 2012] #' with error: failure to login
> 2012-11-29 20:23:47,183 [logicalNode new-host-2.home-21] INFO rolling.RollSink: closing RollSink 'escapedCustomDfs("file:///Users/abhi/flume","testfile%{rolltag}" )'
> 2012-11-29 20:23:47,183 [logicalNode new-host-2.home-21] INFO rolling.RollSink: opening RollSink  'escapedCustomDfs("file:///Users/abhi/flume","testfile%{rolltag}" )'
> 2012-11-29 20:23:47,184 [logicalNode new-host-2.home-21] INFO debug.InsistentOpenDecorator: Opened MaskDecorator on try 0
> 2012-11-29 20:23:47,185 [pool-9-thread-1] INFO hdfs.EscapedCustomDfsSink: Opening file:///Users/abhi/flume/testfile20121129-202347184-0800.1354249427184178000.00000021
> 2012-11-29 20:23:47,192 [logicalNode new-host-2.home-21] INFO debug.InsistentAppendDecorator: append attempt 7 failed, backoff (60000ms): failure to login
> 
> I am not sure what is going on. The permissions on the directory are 777. Can anyone help with this error?
> 
> 
> Thanks,
> 
> Abhi


Re: error writing to collectorSink on mac os x

Posted by Alexander Alten-Lorenz <wg...@gmail.com>.
Ah, got it.

You are using Flume 0.9.x, which is a pretty old version. We're on Flume NG now - currently Flume 1.2.0, and 1.3.0 will be released very soon. I strongly suggest moving over, since 0.9.x will not get much attention anymore ;)

For Flume 1.2, please check:
https://cwiki.apache.org/confluence/display/FLUME/Home

UserGuide:
http://flume.apache.org/FlumeUserGuide.html
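
To give you a rough idea, here is a minimal sketch of an equivalent 1.x agent configuration. The agent name, channel sizing, and the exec-based source are my assumptions - 1.x has no direct counterpart to the 0.9 text() source, so cat-ing the file via an exec source is one common substitute:

  # flume.conf - minimal sketch of a Flume 1.x agent (assumed name: agent1)
  agent1.sources  = src1
  agent1.channels = ch1
  agent1.sinks    = snk1

  # exec source reads the file by running a shell command
  agent1.sources.src1.type = exec
  agent1.sources.src1.command = cat /etc/services
  agent1.sources.src1.channels = ch1

  # in-memory channel buffers events between source and sink
  agent1.channels.ch1.type = memory
  agent1.channels.ch1.capacity = 1000

  # HDFS sink, roughly what collectorSink("hdfs://...","testfile") did in 0.9
  agent1.sinks.snk1.type = hdfs
  agent1.sinks.snk1.hdfs.path = hdfs://localhost:8020/user/abhi/flume
  agent1.sinks.snk1.hdfs.filePrefix = testfile
  agent1.sinks.snk1.hdfs.fileType = DataStream
  agent1.sinks.snk1.channel = ch1

You would then start the agent with something like:

  flume-ng agent -n agent1 -c conf -f flume.conf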

And a tip on the side - please don't use Homebrew for Hadoop-related projects. They can all be built natively on OS X :)

cheers,
 Alex

On Nov 30, 2012, at 8:34 AM, Abhi Dubey <ab...@gmail.com> wrote:

> Alex,
> 
> Thanks for the quick reply. I am afraid I don't understand which config file you are asking for. Do you mean the config file I used to set up the Flume source and sink? If so, I just used the web interface http://localhost:35871/flumemaster.jsp to set the config.
> I used source: text("/etc/services")
> I used Sink: collectorSink("hdfs://localhost:8020/user/abhi/flume/","test file")
> 
> This throws the error that I posted in the previous message. I also tried a different collectorSink target - writing to the local file system.
> source: text("/etc/services")
> Sink: collectorSink("file:///Users/abhi/","testfile")
> It throws the same error. 
> 
> But if I use
> source: text("/etc/services")
> sink: console
> 
> It works. It also works if I use a text sink.
> source: text("/etc/services")
> sink: text("services.copy")
> 
> 
> Can you tell me once again which config file you need to look at?
> 
> As per the Homebrew formulas, Hadoop is version 1.0.4 and Flume is version 0.9.4-cdh3u2. After installing Flume, I replaced the bundled Hadoop jar with the one from the Hadoop install.
> 
> 
> Thanks,
> 
> Abhi
> 
> 
> On Nov 29, 2012, at 11:17 PM, Alexander Alten-Lorenz <wg...@gmail.com> wrote:
> 
>> Hi,
>> 
>> Can you connect to your HDFS instance? Please attach the config file for further debugging.
>> Btw, how HDFS / Flume / anything else installed via Homebrew behaves depends on the recipes Homebrew uses. Please check this first.
>> 
>> Best
>> - Alex
>> 
>> 
>> On Nov 30, 2012, at 6:50 AM, Abhi Dubey <ab...@gmail.com> wrote:
>> 
>>>> Hi,
>>>> 
>>>> I am running Hadoop in pseudo-distributed mode on Mac OS X 10.8.2. I installed Flume via Homebrew, and I can write to the console and to the local file system using the text sink. Hadoop was also installed via Homebrew.
>>>> 
>>>> But if I use collectorSink with either a file:/// or an hdfs:// target, I get an error. The error is the same for both types of targets.
>>>> 
>>>> 
>>>> 2012-11-29 20:22:47,182 [logicalNode new-host-2.home-21] INFO debug.InsistentAppendDecorator: append attempt 6 failed, backoff (60000ms): failure to login
>>>> 2012-11-29 20:23:47,181 [pool-8-thread-1] INFO hdfs.EscapedCustomDfsSink: Opening file:///Users/abhi/flume/testfile20121129-202317290-0800.1354249397290310000.00000037
>>>> 2012-11-29 20:23:47,183 [logicalNode new-host-2.home-21] INFO debug.StubbornAppendSink: append failed on event 'new-host-2.home [INFO Thu Nov 29 20:21:44 PST 2012] #' with error: failure to login
>>>> 2012-11-29 20:23:47,183 [logicalNode new-host-2.home-21] INFO rolling.RollSink: closing RollSink 'escapedCustomDfs("file:///Users/abhi/flume","testfile%{rolltag}" )'
>>>> 2012-11-29 20:23:47,183 [logicalNode new-host-2.home-21] INFO rolling.RollSink: opening RollSink  'escapedCustomDfs("file:///Users/abhi/flume","testfile%{rolltag}" )'
>>>> 2012-11-29 20:23:47,184 [logicalNode new-host-2.home-21] INFO debug.InsistentOpenDecorator: Opened MaskDecorator on try 0
>>>> 2012-11-29 20:23:47,185 [pool-9-thread-1] INFO hdfs.EscapedCustomDfsSink: Opening file:///Users/abhi/flume/testfile20121129-202347184-0800.1354249427184178000.00000021
>>>> 2012-11-29 20:23:47,192 [logicalNode new-host-2.home-21] INFO debug.InsistentAppendDecorator: append attempt 7 failed, backoff (60000ms): failure to login
>>>> 
>>>> I am not sure what is going on. The permissions on the directory are 777. Can anyone help with this error?
>>>> 
>>>> 
>>>> Thanks,
>>>> 
>>>> Abhi
>>> 
>> 
>> --
>> Alexander Alten-Lorenz
>> http://mapredit.blogspot.com
>> German Hadoop LinkedIn Group: http://goo.gl/N8pCF
>> 
> 

--
Alexander Alten-Lorenz
http://mapredit.blogspot.com
German Hadoop LinkedIn Group: http://goo.gl/N8pCF


Re: error writing to collectorSink on mac os x

Posted by Abhi Dubey <ab...@gmail.com>.
Alex,

Thanks for the quick reply. I am afraid I don't understand which config file you are asking for. Do you mean the config file I used to set up the Flume source and sink? If so, I just used the web interface http://localhost:35871/flumemaster.jsp to set the config.
I used source: text("/etc/services")
I used Sink: collectorSink("hdfs://localhost:8020/user/abhi/flume/","test file")

This throws the error that I posted in the previous message. I also tried a different collectorSink target - writing to the local file system.
source: text("/etc/services")
Sink: collectorSink("file:///Users/abhi/","testfile")
It throws the same error. 

But if I use
source: text("/etc/services")
sink: console

It works. It also works if I use a text sink.
source: text("/etc/services")
sink: text("services.copy")


Can you tell me once again which config file you need to look at?

As per the Homebrew formulas, Hadoop is version 1.0.4 and Flume is version 0.9.4-cdh3u2. After installing Flume, I replaced the bundled Hadoop jar with the one from the Hadoop install.
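
(For reference, the same source/sink mapping can also be set from the 0.9.x flume shell instead of the web UI. This is a sketch only - it assumes the master is reachable on localhost at its default shell port and uses my logical node name:

  flume shell -c localhost
  exec config 'new-host-2.home' 'text("/etc/services")' 'collectorSink("hdfs://localhost:8020/user/abhi/flume/","testfile")'
)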


Thanks,

Abhi


On Nov 29, 2012, at 11:17 PM, Alexander Alten-Lorenz <wg...@gmail.com> wrote:

> Hi,
> 
> Can you connect to your HDFS instance? Please attach the config file for further debugging.
> Btw, how HDFS / Flume / anything else installed via Homebrew behaves depends on the recipes Homebrew uses. Please check this first.
> 
> Best
> - Alex
> 
> 
> On Nov 30, 2012, at 6:50 AM, Abhi Dubey <ab...@gmail.com> wrote:
> 
>>> Hi,
>>> 
>>> I am running Hadoop in pseudo-distributed mode on Mac OS X 10.8.2. I installed Flume via Homebrew, and I can write to the console and to the local file system using the text sink. Hadoop was also installed via Homebrew.
>>> 
>>> But if I use collectorSink with either a file:/// or an hdfs:// target, I get an error. The error is the same for both types of targets.
>>> 
>>> 
>>> 2012-11-29 20:22:47,182 [logicalNode new-host-2.home-21] INFO debug.InsistentAppendDecorator: append attempt 6 failed, backoff (60000ms): failure to login
>>> 2012-11-29 20:23:47,181 [pool-8-thread-1] INFO hdfs.EscapedCustomDfsSink: Opening file:///Users/abhi/flume/testfile20121129-202317290-0800.1354249397290310000.00000037
>>> 2012-11-29 20:23:47,183 [logicalNode new-host-2.home-21] INFO debug.StubbornAppendSink: append failed on event 'new-host-2.home [INFO Thu Nov 29 20:21:44 PST 2012] #' with error: failure to login
>>> 2012-11-29 20:23:47,183 [logicalNode new-host-2.home-21] INFO rolling.RollSink: closing RollSink 'escapedCustomDfs("file:///Users/abhi/flume","testfile%{rolltag}" )'
>>> 2012-11-29 20:23:47,183 [logicalNode new-host-2.home-21] INFO rolling.RollSink: opening RollSink  'escapedCustomDfs("file:///Users/abhi/flume","testfile%{rolltag}" )'
>>> 2012-11-29 20:23:47,184 [logicalNode new-host-2.home-21] INFO debug.InsistentOpenDecorator: Opened MaskDecorator on try 0
>>> 2012-11-29 20:23:47,185 [pool-9-thread-1] INFO hdfs.EscapedCustomDfsSink: Opening file:///Users/abhi/flume/testfile20121129-202347184-0800.1354249427184178000.00000021
>>> 2012-11-29 20:23:47,192 [logicalNode new-host-2.home-21] INFO debug.InsistentAppendDecorator: append attempt 7 failed, backoff (60000ms): failure to login
>>> 
>>> I am not sure what is going on. The permissions on the directory are 777. Can anyone help with this error?
>>> 
>>> 
>>> Thanks,
>>> 
>>> Abhi
>> 
> 
> --
> Alexander Alten-Lorenz
> http://mapredit.blogspot.com
> German Hadoop LinkedIn Group: http://goo.gl/N8pCF
> 


Re: error writing to collectorSink on mac os x

Posted by Abhi Dubey <ab...@gmail.com>.
Alex,

I just wanted to let you know that your tip to install Flume 1.x worked. I am able to set up an HDFS sink and write correctly. Thanks!


Abhi

On Nov 29, 2012, at 11:33 PM, Abhi Dubey <ab...@gmail.com> wrote:

> Alex,
> 
> Thanks for the quick reply. I am afraid I don't understand which config file you are asking for. Do you mean the config file I used to set up the Flume source and sink? If so, I just used the web interface http://localhost:35871/flumemaster.jsp to set the config.
> I used source: text("/etc/services")
> I used Sink: collectorSink("hdfs://localhost:8020/user/abhi/flume/","test file")
> 
> This throws the error that I posted in the previous message. I also tried a different collectorSink target - writing to the local file system.
> source: text("/etc/services")
> Sink: collectorSink("file:///Users/abhi/","testfile")
> It throws the same error. 
> 
> But if I use
> source: text("/etc/services")
> sink: console
> 
> It works. It also works if I use a text sink.
> source: text("/etc/services")
> sink: text("services.copy")
> 
> 
> Can you tell me once again which config file you need to look at?
> 
> As per the Homebrew formulas, Hadoop is version 1.0.4 and Flume is version 0.9.4-cdh3u2. After installing Flume, I replaced the bundled Hadoop jar with the one from the Hadoop install.
> 
> 
> On Nov 29, 2012, at 11:17 PM, Alexander Alten-Lorenz <wg...@gmail.com> wrote:
> 
>> Hi,
>> 
>> Can you connect to your HDFS instance? Please attach the config file for further debugging.
>> Btw, how HDFS / Flume / anything else installed via Homebrew behaves depends on the recipes Homebrew uses. Please check this first.
>> 
>> Best
>> - Alex
>> 
>> 
>> On Nov 30, 2012, at 6:50 AM, Abhi Dubey <ab...@gmail.com> wrote:
>> 
>>>> Hi,
>>>> 
>>>> I am running Hadoop in pseudo-distributed mode on Mac OS X 10.8.2. I installed Flume via Homebrew, and I can write to the console and to the local file system using the text sink. Hadoop was also installed via Homebrew.
>>>> 
>>>> But if I use collectorSink with either a file:/// or an hdfs:// target, I get an error. The error is the same for both types of targets.
>>>> 
>>>> 
>>>> 2012-11-29 20:22:47,182 [logicalNode new-host-2.home-21] INFO debug.InsistentAppendDecorator: append attempt 6 failed, backoff (60000ms): failure to login
>>>> 2012-11-29 20:23:47,181 [pool-8-thread-1] INFO hdfs.EscapedCustomDfsSink: Opening file:///Users/abhi/flume/testfile20121129-202317290-0800.1354249397290310000.00000037
>>>> 2012-11-29 20:23:47,183 [logicalNode new-host-2.home-21] INFO debug.StubbornAppendSink: append failed on event 'new-host-2.home [INFO Thu Nov 29 20:21:44 PST 2012] #' with error: failure to login
>>>> 2012-11-29 20:23:47,183 [logicalNode new-host-2.home-21] INFO rolling.RollSink: closing RollSink 'escapedCustomDfs("file:///Users/abhi/flume","testfile%{rolltag}" )'
>>>> 2012-11-29 20:23:47,183 [logicalNode new-host-2.home-21] INFO rolling.RollSink: opening RollSink  'escapedCustomDfs("file:///Users/abhi/flume","testfile%{rolltag}" )'
>>>> 2012-11-29 20:23:47,184 [logicalNode new-host-2.home-21] INFO debug.InsistentOpenDecorator: Opened MaskDecorator on try 0
>>>> 2012-11-29 20:23:47,185 [pool-9-thread-1] INFO hdfs.EscapedCustomDfsSink: Opening file:///Users/abhi/flume/testfile20121129-202347184-0800.1354249427184178000.00000021
>>>> 2012-11-29 20:23:47,192 [logicalNode new-host-2.home-21] INFO debug.InsistentAppendDecorator: append attempt 7 failed, backoff (60000ms): failure to login
>>>> 
>>>> I am not sure what is going on. The permissions on the directory are 777. Can anyone help with this error?
>>>> 
>>>> 
>>>> Thanks,
>>>> 
>>>> Abhi
>>> 
>> 
>> --
>> Alexander Alten-Lorenz
>> http://mapredit.blogspot.com
>> German Hadoop LinkedIn Group: http://goo.gl/N8pCF
>> 
> 


Re: error writing to collectorSink on mac os x

Posted by Alexander Alten-Lorenz <wg...@gmail.com>.
Hi,

Can you connect to your HDFS instance? Please attach the config file for further debugging.
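
A quick way to check connectivity (a sketch, assuming your NameNode listens on localhost:8020) is to list HDFS from the shell:

  hadoop fs -ls hdfs://localhost:8020/

If that already fails with a login or connection error, Flume will most likely hit the same problem.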
Btw, how HDFS / Flume / anything else installed via Homebrew behaves depends on the recipes Homebrew uses. Please check this first.

Best
- Alex


On Nov 30, 2012, at 6:50 AM, Abhi Dubey <ab...@gmail.com> wrote:

>> Hi,
>> 
>> I am running Hadoop in pseudo-distributed mode on Mac OS X 10.8.2. I installed Flume via Homebrew, and I can write to the console and to the local file system using the text sink. Hadoop was also installed via Homebrew.
>> 
>> But if I use collectorSink with either a file:/// or an hdfs:// target, I get an error. The error is the same for both types of targets.
>> 
>> 
>> 2012-11-29 20:22:47,182 [logicalNode new-host-2.home-21] INFO debug.InsistentAppendDecorator: append attempt 6 failed, backoff (60000ms): failure to login
>> 2012-11-29 20:23:47,181 [pool-8-thread-1] INFO hdfs.EscapedCustomDfsSink: Opening file:///Users/abhi/flume/testfile20121129-202317290-0800.1354249397290310000.00000037
>> 2012-11-29 20:23:47,183 [logicalNode new-host-2.home-21] INFO debug.StubbornAppendSink: append failed on event 'new-host-2.home [INFO Thu Nov 29 20:21:44 PST 2012] #' with error: failure to login
>> 2012-11-29 20:23:47,183 [logicalNode new-host-2.home-21] INFO rolling.RollSink: closing RollSink 'escapedCustomDfs("file:///Users/abhi/flume","testfile%{rolltag}" )'
>> 2012-11-29 20:23:47,183 [logicalNode new-host-2.home-21] INFO rolling.RollSink: opening RollSink  'escapedCustomDfs("file:///Users/abhi/flume","testfile%{rolltag}" )'
>> 2012-11-29 20:23:47,184 [logicalNode new-host-2.home-21] INFO debug.InsistentOpenDecorator: Opened MaskDecorator on try 0
>> 2012-11-29 20:23:47,185 [pool-9-thread-1] INFO hdfs.EscapedCustomDfsSink: Opening file:///Users/abhi/flume/testfile20121129-202347184-0800.1354249427184178000.00000021
>> 2012-11-29 20:23:47,192 [logicalNode new-host-2.home-21] INFO debug.InsistentAppendDecorator: append attempt 7 failed, backoff (60000ms): failure to login
>> 
>> I am not sure what is going on. The permissions on the directory are 777. Can anyone help with this error?
>> 
>> 
>> Thanks,
>> 
>> Abhi
> 

--
Alexander Alten-Lorenz
http://mapredit.blogspot.com
German Hadoop LinkedIn Group: http://goo.gl/N8pCF