Posted to user@flume.apache.org by 基勇 <25...@qq.com> on 2014/08/14 10:22:10 UTC

Re: flume failover only support two nodes?

I know!
Thank you, Jeff Lord and Hari Shreedharan!




------------------ Original Message ------------------
From: "Jeff Lord" <jl...@cloudera.com>
Sent: Thursday, August 14, 2014, 12:16 PM
To: "user@flume.apache.org" <us...@flume.apache.org>

Subject: Re: flume failover only support two nodes?



Also, all of your sinks point to the same host for the next hop, so if the agent on that host is unavailable for some reason, then failover is pointless.
For testing this is OK; for production there is a better way.
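
As a rough sketch of that topology (the hostnames 192.168.220.160 and 192.168.220.161 are placeholders for illustration, not from the original config), each sink in the group would point at a different downstream agent so that losing one host does not take out every sink at once:

a1.sinks.k1.type = avro
a1.sinks.k1.channel = c1
a1.sinks.k1.hostname = 192.168.220.160
a1.sinks.k1.port = 44411

a1.sinks.k2.type = avro
a1.sinks.k2.channel = c1
a1.sinks.k2.hostname = 192.168.220.161
a1.sinks.k2.port = 44411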
 
On Wednesday, August 13, 2014, Hari Shreedharan <hs...@cloudera.com> wrote:
 Each sink needs to have a different priority. If multiple sinks have the same priority, only one of them will be used.
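
 A minimal sketch of that change against the config below, giving every sink in the failover group a distinct priority (the exact values are only illustrative; the highest-priority sink that is up gets the events):

 a1.sinkgroups.g1.sinks = k1 k2 k3 k4
 a1.sinkgroups.g1.processor.type = failover
 a1.sinkgroups.g1.processor.priority.k1 = 10
 a1.sinkgroups.g1.processor.priority.k2 = 9
 a1.sinkgroups.g1.processor.priority.k3 = 8
 a1.sinkgroups.g1.processor.priority.k4 = 7
 a1.sinkgroups.g1.processor.maxpenalty = 10000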
 
 基勇 wrote:
 
 Hello guys,
 I tested the flume failover feature and found that failover seems to support only two nodes.
 May I ask whether that is so?
 
 config file:
 storm@storm01:~/apache-flume-1.5.0-bin/conf$ more flume-sink.properties
 #Name the components on this agent
 a1.sources = r1
 a1.sinks = k1 k2 k3 k4
 a1.channels = c1
 
 #Describe the sinkgroups
 a1.sinkgroups = g1 g2
 a1.sinkgroups.g1.sinks = k1 k2 k3 k4
 a1.sinkgroups.g1.processor.type = failover
 a1.sinkgroups.g1.processor.priority.k1 = 10
 a1.sinkgroups.g1.processor.priority.k3 = 10
 a1.sinkgroups.g1.processor.priority.k4 = 10
 a1.sinkgroups.g1.processor.priority.k2 = 5
 a1.sinkgroups.g1.processor.maxpenalty = 10000
 
 #a1.sinkgroups.g2.sinks = k3 k4
 #a1.sinkgroups.g2.processor.type = load_balance
 #a1.sinkgroups.g2.processor.backoff = true
 #a1.sinkgroups.g2.processor.selector = round_robin
 
 #Describe/config the source
 a1.sources.r1.type = syslogtcp
 a1.sources.r1.port = 5140
 a1.sources.r1.host = localhost
 a1.sources.r1.channels = c1
 
 #Describe the sink
 a1.sinks.k1.type = avro
 a1.sinks.k1.channel = c1
 a1.sinks.k1.hostname = 192.168.220.159
 a1.sinks.k1.port = 44411
 
 a1.sinks.k2.type = avro
 a1.sinks.k2.channel = c1
 a1.sinks.k2.hostname = 192.168.220.159
 a1.sinks.k2.port = 44422
 
 a1.sinks.k3.type = avro
 a1.sinks.k3.channel = c1
 a1.sinks.k3.hostname = 192.168.220.159
 a1.sinks.k3.port = 44433
 
 a1.sinks.k4.type = avro
 a1.sinks.k4.channel = c1
 a1.sinks.k4.hostname = 192.168.220.159
 a1.sinks.k4.port = 44444
 #Use a channel which buffers events in memory
 a1.channels.c1.type = memory
 a1.channels.c1.capacity = 1000
 a1.channels.c1.transactionCapacity = 100
 
 Only the k3 and k2 nodes received the data.
 When I stop k2 and k3, k1 and k4 can receive the data.
 
 Exception information:
 2014-07-08 08:10:06,403 (SinkRunner-PollingRunner-FailoverSinkProcessor) [ERROR - org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:160)] Unable to deliver event. Exception follows.
 org.apache.flume.EventDeliveryException: All sinks failed to process, nothing left to failover to
     at org.apache.flume.sink.FailoverSinkProcessor.process(FailoverSinkProcessor.java:191)
     at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
     at java.lang.Thread.run(Thread.java:745)
 2014-07-08 08:10:11,408 (SinkRunner-PollingRunner-FailoverSinkProcessor) [INFO - org.apache.flume.sink.AbstractRpcSink.createConnection(AbstractRpcSink.java:206)] Rpc sink k2: Building RpcClient with hostname: 192.168.220.159, port: 44422
 2014-07-08 08:10:11,408 (SinkRunner-PollingRunner-FailoverSinkProcessor) [INFO - org.apache.flume.sink.AvroSink.initializeRpcClient(AvroSink.java:126)] Attempting to create Avro Rpc client.
 2014-07-08 08:10:11,408 (SinkRunner-PollingRunner-FailoverSinkProcessor) [WARN - org.apache.flume.api.NettyAvroRpcClient.configure(NettyAvroRpcClient.java:620)] Using default maxIOWorkers
 2014-07-08 08:10:11,417 (SinkRunner-PollingRunner-FailoverSinkProcessor) [ERROR - org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:160)] Unable to deliver event. Exception follows.
 org.apache.flume.EventDeliveryException: All sinks failed to process, nothing left to failover to
     at org.apache.flume.sink.FailoverSinkProcessor.process(FailoverSinkProcessor.java:191)
     at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
     at java.lang.Thread.run(Thread.java:745)
 2014-07-08 08:10:16,422 (SinkRunner-PollingRunner-FailoverSinkProcessor) [INFO - org.apache.flume.sink.AbstractRpcSink.createConnection(AbstractRpcSink.java:206)] Rpc sink k3: Building RpcClient with hostname: 192.168.220.159, port: 44433
 2014-07-08 08:10:16,423 (SinkRunner-PollingRunner-FailoverSinkProcessor) [INFO - org.apache.flume.sink.AvroSink.initializeRpcClient(AvroSink.java:126)] Attempting to create Avro Rpc client.
 2014-07-08 08:10:16,425 (SinkRunner-PollingRunner-FailoverSinkProcessor) [WARN - org.apache.flume.api.NettyAvroRpcClient.configure(NettyAvroRpcClient.java:620)] Using default maxIOWorkers
 2014-07-08 08:10:16,431 (SinkRunner-PollingRunner-FailoverSinkProcessor) [ERROR - org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:160)] Unable to deliver event. Exception follows.
 org.apache.flume.EventDeliveryException: All sinks failed to process, nothing left to failover to
     at org.apache.flume.sink.FailoverSinkProcessor.process(FailoverSinkProcessor.java:191)
     at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
     at java.lang.Thread.run(Thread.java:745)