Posted to user@storm.apache.org by Nishu <ni...@gmail.com> on 2014/06/18 09:24:14 UTC

How to update a running Storm Topology

Hi,

I need to update a running topology on some user action.
So I have used the following approach: killing the running topology and then
starting it again:

               NimbusClient client = NimbusClient.getConfiguredClient(conf);
               try {
                   ClusterSummary summary = client.getClient().getClusterInfo();
                   for (TopologySummary s : summary.get_topologies()) {
                       if (s.get_name().equals(name)) {
                           client.getClient().killTopology(name);
                           return true;
                       }
                   }
                   return false;
               } catch (TException e) {
                   throw new RuntimeException(e);
               }

               StormSubmitter.submitTopology(topologyName, conf, builder.createTopology());



What happens here is: the topology is killed by the client, but when
StormSubmitter submits the same topology again, it fails with a duplicate
topology name exception like the one below:

Exception in thread "main" java.lang.RuntimeException: Topology with name
`test_topology` already exists on cluster
        at backtype.storm.StormSubmitter.submitTopology(StormSubmitter.java:89)
        at backtype.storm.StormSubmitter.submitTopology(StormSubmitter.java:58)
        at com.cts.TestTopology.main(TestTopology.java:131)



What should I do to overcome this problem? Or is there another way to
update a running topology?

Thanks,
Nishu Tayal

Re: How to update a running Storm Topology

Posted by Nishu <ni...@gmail.com>.
Thanks Nathan, it worked out!




-- 
with regards,
Nishu Tayal

Re: How to update a running Storm Topology

Posted by Nathan Leung <nc...@gmail.com>.
If you kill a topology in the UI, you will notice that it sometimes takes
a while for it to clear and go away. If you try to reload the topology
during this time you will get the same exception. You should loop, checking
Nimbus for this topology after you kill it, and only reload after you
detect that it has gone away.
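Nathan's wait-then-resubmit loop can be sketched as follows. This is a minimal, self-contained illustration, not code from the thread: the `BooleanSupplier` stands in for a real check against Nimbus (e.g. scanning `ClusterSummary.get_topologies()` for the topology name via `NimbusClient`), and `waitUntilGone` is a hypothetical helper name.

```java
import java.util.function.BooleanSupplier;

public class Main {
    // Poll until the topology is reported gone, or give up after maxAttempts.
    // In a real client, isRunning would ask Nimbus (via NimbusClient) whether
    // the topology name still appears in ClusterSummary.get_topologies().
    static boolean waitUntilGone(BooleanSupplier isRunning,
                                 int maxAttempts, long sleepMs)
            throws InterruptedException {
        for (int i = 0; i < maxAttempts; i++) {
            if (!isRunning.getAsBoolean()) {
                return true;           // cleared: safe to resubmit now
            }
            Thread.sleep(sleepMs);     // give Nimbus time to remove it
        }
        return false;                  // still listed: do not resubmit yet
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate a topology that is still listed for the first two polls.
        final int[] polls = {0};
        boolean gone = waitUntilGone(() -> ++polls[0] < 3, 10, 1);
        System.out.println(gone);  // prints "true"
    }
}
```

Only after `waitUntilGone` returns true should `StormSubmitter.submitTopology` be called again with the same name.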

Re: Can I not use tuple timeout?

Posted by ja...@yahoo.com.tw.
Hi,
You are correct: you can set a longer timeout in the config. However, if you don't want to use the ack feature, you can set the number of ackers to 0. That way, your topology runs in an unreliable mode; in other words, none of your tuples will be tracked.
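In code, disabling the ackers as described is a one-line Config change. A sketch against the backtype.storm API used elsewhere in this thread (a config fragment, not a full program):

```java
Config conf = new Config();
// Sets topology.acker.executors to 0: with no ackers, tuples are never
// tracked, so there are no ack timeouts and no replays from the spout.
conf.setNumAckers(0);
```

Note that in this mode the spout's ack and fail methods are simply never called for those tuples.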

Best regards,
James Fu



Can I not use tuple timeout?

Posted by 이승진 <sw...@navercorp.com>.
Dear all,
 
AFAIK, the spout assumes that processing of a tuple has failed when it does not receive an ack within the timeout interval.
 
But I found that even if the spout's fail method is called, the tuple is still running through the topology, i.e. other bolts are still processing it.
 
So the failed tuple is not actually failed, just delayed.
 
I know I can configure a bigger timeout value, but I want to know if there is a way to avoid the spout's timeout entirely.
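For reference, the timeout in question is topology.message.timeout.secs (default 30 seconds), which can be raised per topology; a sketch of the config fragment, assuming the backtype.storm Config class from this thread:

```java
Config conf = new Config();
// A tuple is failed and replayed only if its ack tree has not completed
// within this many seconds; 120 here is an arbitrary illustrative value.
conf.setMessageTimeoutSecs(120);
```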
 
Sincerely