Posted to user@flume.apache.org by Chalcy Raja <Ch...@careerbuilder.com> on 2012/01/24 19:16:26 UTC

flume windows node - 404 on localhost

Hello Flume users,

I am new to Flume.  I have successfully set up a master and a node on two Linux virtual machines, and they are collecting logs as expected.

Now I am trying to set up a Windows Flume node.  I followed the installation guide and could successfully run the node as a service, but when I go to port 35862, I get a 404 like the one below.

I also tried to start the node from the Windows command line, and it stays in a "start pending" status.

Any help is appreciated.

Thanks,
Chalcy

[screenshot: 404 error page]



From: Arvind Prabhakar [mailto:arvind@apache.org]
Sent: Friday, January 13, 2012 12:26 PM
To: flume-user@incubator.apache.org
Subject: Re: Flume NG reliability and failover mechanisms

Hi Connolly,

Thanks for taking the time to evaluate Flume NG. Please see my comments inline below:
On Thu, Jan 12, 2012 at 5:17 PM, Connolly Juhani <ju...@cyberagent.co.jp> wrote:
Hi,

Coming into the new year we've been trying out Flume NG and have run into
some questions. I tried to pick up what was possible from the javadoc and
source, but pardon me if some of these are obvious.

1) Reading http://www.cloudera.com/blog/2011/12/apache-flume-architecture-of-flume-ng-2/
describes the reliability, but what happens if we lose a node?
1.1) Presumably the data stored in its channel is gone?

It depends on the kind of channel you have. If you use a memory channel, the data will be gone. If you use a file channel, the data will be available. If you use a JDBC channel, it is guaranteed to be available.
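
(A minimal sketch of that difference in Flume NG's properties-file configuration, assuming illustrative component names and paths; the property names are taken from later NG releases, so treat this as an assumption rather than a verified config:)

agent1.channels = c1

# Durable channel: events survive an agent or node restart.
agent1.channels.c1.type = file
agent1.channels.c1.checkpointDir = /var/flume/checkpoint
agent1.channels.c1.dataDirs = /var/flume/data

# By contrast, a memory channel (type = memory) keeps events only in RAM,
# so they are lost if the node dies; type = jdbc stores them in a database.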

1.2) If we restart the node and the channel is a persistent one (file or
JDBC based), will it then happily start feeding data into the sink?

Correct.


2) Is there some way to deliver data along multiple paths but make sure
it only gets persisted to a sink once, to avoid loss of data to a dying
node?

We have talked about fail-over sink implementations. Although we don't have them implemented yet, we do intend to provide these facilities.

2.1) Will there be stuff equivalent to the E2E mode of OG?

If you mean end-to-end reliable delivery guarantee, Flume NG already provides that. You can get this by configuring your flow with reliable channels (JDBC).
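
(A sketch of such a flow, with hypothetical agent names and ports, showing how a durable JDBC channel sits between source and sink; this is an assumption of the typical NG wiring, not a confirmed recipe:)

agent1.sources = r1
agent1.channels = c1
agent1.sinks = k1

# Avro source receiving events from upstream agents.
agent1.sources.r1.type = avro
agent1.sources.r1.bind = 0.0.0.0
agent1.sources.r1.port = 41414
agent1.sources.r1.channels = c1

# JDBC channel: events are stored in an embedded database until the sink
# has taken them, so a crash between source and sink does not lose data.
agent1.channels.c1.type = jdbc

# Logger sink used here only to keep the example small.
agent1.sinks.k1.type = logger
agent1.sinks.k1.channel = c1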

2.2) Anything else planned, but further down the horizon? I didn't
see much at https://cwiki.apache.org/confluence/display/FLUME/Features+and+Use+Cases
but that doesn't look very up to date.

Most of the discussion has now moved to JIRA and the dev list. Features such as channel multiplexing from the same source, a compatible source implementation for hybrid installations of the previous version of Flume and NG together, and event prioritization have been discussed, among many others. As and when resources permit, we will be addressing these going forward.


3) Using the HDFS sink, we're getting tons of really small files. I
suspect this is related to append; having a poke around the source,
it turns out that append is only used if hdfs.append.support is set
to true. The hdfs-default.xml name for this variable is
dfs.support.append. Is this intentional? Should we be adding
hdfs.append.support manually to our config, or is there something else
going on here (regarding all the tiny files)?

(Leaving this for Prasad, who did the implementation of the HDFS sink.)
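
(For reference while waiting on Prasad: a sketch of the HDFS sink roll settings that usually govern file size in Flume NG; the property names are taken from later NG releases and the path and thresholds are illustrative assumptions, so this may not be the whole story behind the tiny files:)

agent1.sinks = k1
agent1.sinks.k1.type = hdfs
agent1.sinks.k1.channel = c1
agent1.sinks.k1.hdfs.path = hdfs://namenode:8020/flume/events

# Larger roll thresholds mean fewer, bigger files; a value of 0 disables
# that particular trigger.
agent1.sinks.k1.hdfs.rollInterval = 300
agent1.sinks.k1.hdfs.rollSize = 134217728
agent1.sinks.k1.hdfs.rollCount = 0

# Whether append is used at all is a separate, Hadoop-side setting
# (dfs.support.append in hdfs-site.xml), which is what the
# hdfs.append.support / dfs.support.append mismatch above is about.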


Any help with these issues would be greatly appreciated.

Thanks,
Arvind


Re: flume windows node - 404 on localhost

Posted by alo alt <wg...@googlemail.com>.
Hi Chalcy,

Simply download the tar.gz, decompress it, and copy the *.war files into the webapps directory of your installation. I'm wondering why flumemaster.war is missing.
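
(A rough sketch of the layout this implies, assuming a default 0.9.x unpack location on Windows; the install path is illustrative:)

C:\flume-0.9.x\webapps\flumeagent.war
C:\flume-0.9.x\webapps\flumemaster.war

With both wars in place, the node status page (e.g. http://localhost:35862/flumeagent.jsp) should stop returning a 404.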

best,
 Alex 

--
Alexander Lorenz
http://mapredit.blogspot.com


RE: flume windows node - 404 on localhost

Posted by Chalcy Raja <Ch...@careerbuilder.com>.
Hi Alex,

We use Windows Server 2008.  Yes, I did restart the process.  No error is showing up, which is the frustrating part.

I am going to try some more too and see what we can get.  Thanks for trying.

--Chalcy



Re: flume windows node - 404 on localhost

Posted by alo alt <wg...@googlemail.com>.
Hi Chalcy,

Also, when you try to connect to the *.jsp, does it show no error? Did you restart the process (via the service manager)?
I will try a test today on a Windows 7 VM. What version are you using?

best,
 Alex 

--
Alexander Lorenz
http://mapredit.blogspot.com



RE: flume windows node - 404 on localhost

Posted by Chalcy Raja <Ch...@careerbuilder.com>.
Hi Alex,

The cleanup was done.

I did check the log files on the master.  It does not give any error.  Also, when I run a flume config, I can see it in the master logs.  But the content is not showing up in either HDFS or a file.

I did set FLUME_HOME and FLUME_CONF_DIR appropriately.  One of the recent errors, when trying a simple file copy, is a file-not-found error.

Thanks,
Chalcy



Re: flume windows node - 404 on localhost

Posted by alo alt <wg...@googlemail.com>.
It should be the same version. Did you clean up the directories before you downgraded? An old .jar could be causing trouble.
For the 0.9.4 error (404), please check the logs when you start the master node.

- Alex 


--
Alexander Lorenz
http://mapredit.blogspot.com


RE: flume windows node - 404 on localhost

Posted by Chalcy Raja <Ch...@careerbuilder.com>.
I had 0.9.4.  Changing it to 0.9.3 brings up the agent page.  I'll still have to try the data-transmission part; I've had no luck with that so far.

Looks like only 0.9.4 is not working.  On the Hadoop cluster we have CDH3u2, which comes with Flume 0.9.4.

Is it okay to have different versions on master and nodes?

What version are you using?

Thanks,
Chalcy



Re: flume windows node - 404 on localhost

Posted by alo alt <wg...@googlemail.com>.
What does the log file say?
What version did you install?

Did you comment out the master-related settings in flume-conf.xml (pre-NG)?

- Alex 

--
Alexander Lorenz
http://mapredit.blogspot.com


RE: flume windows node - 404 on localhost

Posted by Chalcy Raja <Ch...@careerbuilder.com>.
Hi Alex,

No luck.  I added that, and also added the webapps directory containing the war file to the path.  I tried appending both flumemaster.jsp and flumeagent.jsp to the end of the URL (http://localhost:35862/flumemaster.jsp).

Same 404.

Thanks,
Chalcy



Re: flume windows node - 404 on localhost

Posted by alo alt <wg...@googlemail.com>.
Add flumemaster.jsp at the end of the URL.

- Alex

--
Alexander Lorenz
http://mapredit.blogspot.com
