Posted to user@storm.apache.org by Vineet Mishra <cl...@gmail.com> on 2015/01/27 14:24:58 UTC

Storm Job picking cached/old jar file

Hi All,

I am stuck on a puzzling issue. I have a 3-node Storm
cluster (apache-storm-0.9.3) with the configuration given below:

node1 - Nimbus, UI
node2 - Supervisor, Worker
node3 - Supervisor, Worker

I have written a topology and was running it through Storm. Soon after, I
made some changes to the bolts, built a fresh jar, redeployed it, and ran it
again with the storm jar command, but it still seems to be referring to the
old jar, cached somewhere (even though I have already deleted that old jar).
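Roughly, this is the cycle I follow (I build with Maven here just as an
illustration; the jar name, main class, and topology name are placeholders):

    # rebuild the topology jar
    mvn clean package

    # kill the running topology and wait for it to disappear from the UI
    storm kill my-topology

    # resubmit the freshly built jar
    storm jar target/my-topology-0.1.jar com.example.MyTopology my-topology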

I am seriously stuck on this issue; I looked at some posts describing the
same problem but couldn't find a satisfactory answer.

Looking for expert comments.

Thanks in advance!

Re: Storm Job picking cached/old jar file

Posted by Vineet Mishra <cl...@gmail.com>.
Thanks Vincent.

I appreciate your kind suggestion; it worked like a charm! :)

On Wed, Jan 28, 2015 at 7:00 PM, Vincent Russell <vi...@gmail.com>
wrote:

>
> You can bundle all of your dependencies into your topology jar using the
> maven shade plugin or uberjar with lein.
>
> On Wed, Jan 28, 2015 at 7:24 AM, Vineet Mishra <cl...@gmail.com>
> wrote:
>
>> Well, thanks all, I got it working. It turns out the topology jar itself
>> had an old copy of the topology on its build path, which is why it kept
>> referring to the old code. However, I only got it working in local mode.
>>
>> I am also looking for the right way to invoke the job in distributed
>> mode. What I currently do is copy the dependencies (Kafka and a few
>> others) into the Storm lib folder, and Storm picks those dependencies up
>> at runtime. But for distributed mode:
>>
>> 1) I am not sure whether putting external dependencies into the Storm lib
>> folder is a good idea.
>> 2) If it isn't, how should the job be submitted in distributed mode?
>>
>> Looking forward to a quick response.
>>
>> Thanks in advance!
>>
>> On Wed, Jan 28, 2015 at 11:49 AM, Vineet Mishra <cl...@gmail.com>
>> wrote:
>>
>>> Hi Jens,
>>>
>>> No, it is not referring to the old jar in the log (because that has
>>> already been deleted); it is picking up the changed jar. The odd part is
>>> that even though Storm picks up the new jar, the topology is still
>>> running the old code (I can't see the new code taking effect).
>>>
>>> Thanks!
>>>
>>> On Tue, Jan 27, 2015 at 11:07 PM, Jens-Uwe Mozdzen <jm...@nde.ag>
>>> wrote:
>>>
>>>> Hi Vineet,
>>>>
>>>> Quoting Vineet Mishra <cl...@gmail.com>:
>>>>
>>>>> Hi Naresh and Jens,
>>>>>
>>>>> First I tried running the job in local mode and it ran fine, but I
>>>>> wanted to run it in a distributed environment, so I killed the job
>>>>> (Ctrl+C), rebuilt the jar with some additions to the bolts, and
>>>>> submitted it to the cluster via StormSubmitter.submitTopology(.,.,.).
>>>>>
>>>>> I am not sure why the job would not run after I changed the build, but
>>>>> soon after a cluster restart the same distributed job started running.
>>>>>
>>>>> Now, if I kill the existing job from the terminal or from the Storm UI
>>>>> it is killed just fine, but Storm keeps referencing the same old jar
>>>>> even though I am making a fresh build.
>>>>>
>>>>
>>>> sorry for repeating myself - *where* is it referencing the same old
>>>> jar... in the log files?
>>>>
>>>> If you're only judging by the functionality/behavior, it might be an
>>>> issue of unchanged serialVersionUIDs and cached classes...
>>>>
>>>> Regards,
>>>> Jens
>>>>
>>>>
>>>
>>
>

Re: Storm Job picking cached/old jar file

Posted by Vincent Russell <vi...@gmail.com>.
You can bundle all of your dependencies into your topology jar using the
maven shade plugin or uberjar with lein.
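For example, a minimal maven-shade-plugin setup along these lines, added
under build/plugins in the pom (the plugin version, jar name, and main class
below are only illustrative), produces a single jar with your dependencies
baked in:

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>2.3</version>
      <configuration>
        <createDependencyReducedPom>false</createDependencyReducedPom>
      </configuration>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
          <configuration>
            <transformers>
              <!-- merges META-INF/services entries from the bundled jars -->
              <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
            </transformers>
          </configuration>
        </execution>
      </executions>
    </plugin>

Keep the storm-core dependency at provided scope so Storm itself is not
packaged into the shaded jar, and then submit it with something like:

    storm jar target/my-topology-0.1.jar com.example.MyTopology my-topology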

On Wed, Jan 28, 2015 at 7:24 AM, Vineet Mishra <cl...@gmail.com>
wrote:

> Well, thanks all, I got it working. It turns out the topology jar itself
> had an old copy of the topology on its build path, which is why it kept
> referring to the old code. However, I only got it working in local mode.
>
> I am also looking for the right way to invoke the job in distributed mode.
> What I currently do is copy the dependencies (Kafka and a few others) into
> the Storm lib folder, and Storm picks those dependencies up at runtime. But
> for distributed mode:
>
> 1) I am not sure whether putting external dependencies into the Storm lib
> folder is a good idea.
> 2) If it isn't, how should the job be submitted in distributed mode?
>
> Looking forward to a quick response.
>
> Thanks in advance!
>
> On Wed, Jan 28, 2015 at 11:49 AM, Vineet Mishra <cl...@gmail.com>
> wrote:
>
>> Hi Jens,
>>
>> No, it is not referring to the old jar in the log (because that has
>> already been deleted); it is picking up the changed jar. The odd part is
>> that even though Storm picks up the new jar, the topology is still running
>> the old code (I can't see the new code taking effect).
>>
>> Thanks!
>>
>> On Tue, Jan 27, 2015 at 11:07 PM, Jens-Uwe Mozdzen <jm...@nde.ag>
>> wrote:
>>
>>> Hi Vineet,
>>>
>>> Quoting Vineet Mishra <cl...@gmail.com>:
>>>
>>>> Hi Naresh and Jens,
>>>>
>>>> First I tried running the job in local mode and it ran fine, but I
>>>> wanted to run it in a distributed environment, so I killed the job
>>>> (Ctrl+C), rebuilt the jar with some additions to the bolts, and
>>>> submitted it to the cluster via StormSubmitter.submitTopology(.,.,.).
>>>>
>>>> I am not sure why the job would not run after I changed the build, but
>>>> soon after a cluster restart the same distributed job started running.
>>>>
>>>> Now, if I kill the existing job from the terminal or from the Storm UI
>>>> it is killed just fine, but Storm keeps referencing the same old jar
>>>> even though I am making a fresh build.
>>>>
>>>
>>> sorry for repeating myself - *where* is it referencing the same old
>>> jar... in the log files?
>>>
>>> If you're only judging by the functionality/behavior, it might be an
>>> issue of unchanged serialVersionUIDs and cached classes...
>>>
>>> Regards,
>>> Jens
>>>
>>>
>>
>

Re: Storm Job picking cached/old jar file

Posted by Vineet Mishra <cl...@gmail.com>.
Well, thanks all, I got it working. It turns out the topology jar itself
had an old copy of the topology on its build path, which is why it kept
referring to the old code. However, I only got it working in local mode.

I am also looking for the right way to invoke the job in distributed mode.
What I currently do is copy the dependencies (Kafka and a few others) into
the Storm lib folder, and Storm picks those dependencies up at runtime. But
for distributed mode:

1) I am not sure whether putting external dependencies into the Storm lib
folder is a good idea.
2) If it isn't, how should the job be submitted in distributed mode?

Looking forward to a quick response.

Thanks in advance!

On Wed, Jan 28, 2015 at 11:49 AM, Vineet Mishra <cl...@gmail.com>
wrote:

> Hi Jens,
>
> No, it is not referring to the old jar in the log (because that has
> already been deleted); it is picking up the changed jar. The odd part is
> that even though Storm picks up the new jar, the topology is still running
> the old code (I can't see the new code taking effect).
>
> Thanks!
>
> On Tue, Jan 27, 2015 at 11:07 PM, Jens-Uwe Mozdzen <jm...@nde.ag>
> wrote:
>
>> Hi Vineet,
>>
>> Quoting Vineet Mishra <cl...@gmail.com>:
>>
>>> Hi Naresh and Jens,
>>>
>>> First I tried running the job in local mode and it ran fine, but I
>>> wanted to run it in a distributed environment, so I killed the job
>>> (Ctrl+C), rebuilt the jar with some additions to the bolts, and
>>> submitted it to the cluster via StormSubmitter.submitTopology(.,.,.).
>>>
>>> I am not sure why the job would not run after I changed the build, but
>>> soon after a cluster restart the same distributed job started running.
>>>
>>> Now, if I kill the existing job from the terminal or from the Storm UI
>>> it is killed just fine, but Storm keeps referencing the same old jar
>>> even though I am making a fresh build.
>>>
>>
>> sorry for repeating myself - *where* is it referencing the same old
>> jar... in the log files?
>>
>> If you're only judging by the functionality/behavior, it might be an
>> issue of unchanged serialVersionUIDs and cached classes...
>>
>> Regards,
>> Jens
>>
>>
>

Re: Storm Job picking cached/old jar file

Posted by Vineet Mishra <cl...@gmail.com>.
Hi Jens,

No, it is not referring to the old jar in the log (because that has already
been deleted); it is picking up the changed jar. The odd part is that even
though Storm picks up the new jar, the topology is still running the old
code (I can't see the new code taking effect).

Thanks!

On Tue, Jan 27, 2015 at 11:07 PM, Jens-Uwe Mozdzen <jm...@nde.ag> wrote:

> Hi Vineet,
>
> Quoting Vineet Mishra <cl...@gmail.com>:
>
>> Hi Naresh and Jens,
>>
>> First I tried running the job in local mode and it ran fine, but I
>> wanted to run it in a distributed environment, so I killed the job
>> (Ctrl+C), rebuilt the jar with some additions to the bolts, and submitted
>> it to the cluster via StormSubmitter.submitTopology(.,.,.).
>>
>> I am not sure why the job would not run after I changed the build, but
>> soon after a cluster restart the same distributed job started running.
>>
>> Now, if I kill the existing job from the terminal or from the Storm UI it
>> is killed just fine, but Storm keeps referencing the same old jar even
>> though I am making a fresh build.
>>
>
> sorry for repeating myself - *where* is it referencing the same old jar...
> in the log files?
>
> If you're only judging by the functionality/behavior, it might be an issue
> of unchanged serialVersionUIDs and cached classes...
>
> Regards,
> Jens
>
>

Re: Storm Job picking cached/old jar file

Posted by Nathan Marz <na...@nathanmarz.com>.
Storm gives every topology a unique id and treats every topology completely
independently, even if it has the same name as an older topology. So it's
not possible for it to "pick up an old jar". The only way this could happen
is if you added your own jars to Storm's classpath on the worker machines.

It would take a really crazy bug for Storm to mix up topology jars as you
say it's doing, and I don't suspect that's the case. If you think it's
picking up an old jar, I would highly recommend looking at the logs to see
which directories Storm is using to find jars.
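If you want to check on a worker machine which jar a given topology id is
actually running from, the submitted code lands under the supervisor's local
state directory. The storm.local.dir path and the topology id below are only
examples of the usual 0.9.x layout:

    # each submitted topology gets its own directory, named after its id
    ls /var/storm/supervisor/stormdist/
    # e.g. mytopology-3-1422400000

    # inside it sits exactly the jar that was uploaded for that submission
    ls /var/storm/supervisor/stormdist/mytopology-3-1422400000/
    # stormcode.ser  stormconf.ser  stormjar.jar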

On Tue, Jan 27, 2015 at 12:37 PM, Jens-Uwe Mozdzen <jm...@nde.ag> wrote:

> Hi Vineet,
>
> Quoting Vineet Mishra <cl...@gmail.com>:
>
>> Hi Naresh and Jens,
>>
>> First I tried running the job in local mode and it ran fine, but I
>> wanted to run it in a distributed environment, so I killed the job
>> (Ctrl+C), rebuilt the jar with some additions to the bolts, and submitted
>> it to the cluster via StormSubmitter.submitTopology(.,.,.).
>>
>> I am not sure why the job would not run after I changed the build, but
>> soon after a cluster restart the same distributed job started running.
>>
>> Now, if I kill the existing job from the terminal or from the Storm UI it
>> is killed just fine, but Storm keeps referencing the same old jar even
>> though I am making a fresh build.
>>
>
> sorry for repeating myself - *where* is it referencing the same old jar...
> in the log files?
>
> If you're only judging by the functionality/behavior, it might be an issue
> of unchanged serialVersionUIDs and cached classes...
>
> Regards,
> Jens
>
>


-- 
Twitter: @nathanmarz
http://nathanmarz.com

Re: Storm Job picking cached/old jar file

Posted by Jens-Uwe Mozdzen <jm...@nde.ag>.
Hi Vineet,

Quoting Vineet Mishra <cl...@gmail.com>:
> Hi Naresh and Jens,
>
> First I tried running the job in local mode and it ran fine, but I wanted
> to run it in a distributed environment, so I killed the job (Ctrl+C),
> rebuilt the jar with some additions to the bolts, and submitted it to the
> cluster via StormSubmitter.submitTopology(.,.,.).
>
> I am not sure why the job would not run after I changed the build, but
> soon after a cluster restart the same distributed job started running.
>
> Now, if I kill the existing job from the terminal or from the Storm UI it
> is killed just fine, but Storm keeps referencing the same old jar even
> though I am making a fresh build.

sorry for repeating myself - *where* is it referencing the same old  
jar... in the log files?

If you're only judging by the functionality/behavior, it might be an  
issue of unchanged serialVersionUIDs and cached classes...
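As an illustration only (the class name is made up), pinning and bumping the
serialVersionUID on your bolts makes it obvious when an old serialized copy
of the class is being picked up instead of your rebuilt one, because
deserialization then fails loudly instead of silently running stale logic:

    import backtype.storm.topology.BasicOutputCollector;
    import backtype.storm.topology.OutputFieldsDeclarer;
    import backtype.storm.topology.base.BaseBasicBolt;
    import backtype.storm.tuple.Tuple;

    public class MyBolt extends BaseBasicBolt {
        // bump this whenever the bolt changes, so stale serialized
        // instances are rejected instead of silently executing old code
        private static final long serialVersionUID = 2L;

        @Override
        public void execute(Tuple input, BasicOutputCollector collector) {
            // new bolt logic goes here
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            // declare output fields here if the bolt emits anything
        }
    }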

Regards,
Jens
-- 
Jens-U. Mozdzen                         voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG      fax     : +49-40-559 51 77
Postfach 61 03 15                       mobile  : +49-179-4 98 21 98
D-22423 Hamburg                         e-mail  : jmozdzen@nde.ag

         Chairwoman of the Supervisory Board: Angelika Mozdzen
           Registered office and register court: Hamburg, HRB 90934
                   Management Board: Jens-U. Mozdzen
                    VAT ID no. DE 814 013 983


Re: Storm Job picking cached/old jar file

Posted by Vineet Mishra <cl...@gmail.com>.
Hi Naresh and Jens,

First I tried running the job in local mode and it ran fine, but I wanted
to run it in a distributed environment, so I killed the job (Ctrl+C),
rebuilt the jar with some additions to the bolts, and submitted it to the
cluster via StormSubmitter.submitTopology(.,.,.).
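Roughly like this (the topology name, the MyKafkaSpout/MyBolt classes and
the parallelism numbers are just placeholders for my own components):

    import backtype.storm.Config;
    import backtype.storm.StormSubmitter;
    import backtype.storm.topology.TopologyBuilder;

    public class MyTopology {
        public static void main(String[] args) throws Exception {
            TopologyBuilder builder = new TopologyBuilder();
            // MyKafkaSpout and MyBolt stand in for the real spout/bolt classes
            builder.setSpout("events", new MyKafkaSpout(), 1);
            builder.setBolt("process", new MyBolt(), 2).shuffleGrouping("events");

            Config conf = new Config();
            conf.setNumWorkers(2);

            // submit to the cluster (instead of running a LocalCluster)
            StormSubmitter.submitTopology("my-topology", conf, builder.createTopology());
        }
    }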

I am not sure why the job would not run after I changed the build, but soon
after a cluster restart the same distributed job started running.

Now, if I kill the existing job from the terminal or from the Storm UI it
is killed just fine, but Storm keeps referencing the same old jar even
though I am making a fresh build.

So the issue seems similar to the one described in this link:

http://mail-archives.apache.org/mod_mbox/storm-dev/201408.mbox/%3CJIRA.12736289.1408831837060.8307.1409003698105@arcas%3E




On Tue, Jan 27, 2015 at 7:10 PM, Naresh Kosgi <na...@gmail.com> wrote:

> When you killed the old topology with the old jar, did you wait for the
> timeout period? If you want to kill it right away, use the -w 0 flag, and
> check the Storm UI to make sure the job is gone. Like Jens-U mentioned, I
> don't think you're looking at the right cause. I have done this myself in
> the past, deploying a topology before the old one was fully killed, so I'm
> just guessing that is what you might be doing...
>
>
> On Tue, Jan 27, 2015 at 8:24 AM, Vineet Mishra <cl...@gmail.com>
> wrote:
>
>> Hi All,
>>
>> I am stuck on a puzzling issue. I have a 3-node Storm
>> cluster (apache-storm-0.9.3) with the configuration given below:
>>
>> node1 - Nimbus, UI
>> node2 - Supervisor, Worker
>> node3 - Supervisor, Worker
>>
>> I have written a topology and was running it through Storm. Soon after, I
>> made some changes to the bolts, built a fresh jar, redeployed it, and ran
>> it again with the storm jar command, but it still seems to be referring to
>> the old jar, cached somewhere (even though I have already deleted that old
>> jar).
>>
>> I am seriously stuck on this issue; I looked at some posts describing the
>> same problem but couldn't find a satisfactory answer.
>>
>> Looking for expert comments.
>>
>> Thanks in advance!
>>
>>
>>
>

Re: Storm Job picking cached/old jar file

Posted by Naresh Kosgi <na...@gmail.com>.
When you killed the old topology with the old jar, did you wait for the
timeout period? If you want to kill it right away, use the -w 0 flag, and
check the Storm UI to make sure the job is gone. Like Jens-U mentioned, I
don't think you're looking at the right cause. I have done this myself in
the past, deploying a topology before the old one was fully killed, so I'm
just guessing that is what you might be doing...
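For example (use whatever name you submitted the topology under):

    # kill immediately instead of waiting out the default shutdown delay
    storm kill my-topology -w 0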


On Tue, Jan 27, 2015 at 8:24 AM, Vineet Mishra <cl...@gmail.com>
wrote:

> Hi All,
>
> I am stuck on a puzzling issue. I have a 3-node Storm
> cluster (apache-storm-0.9.3) with the configuration given below:
>
> node1 - Nimbus, UI
> node2 - Supervisor, Worker
> node3 - Supervisor, Worker
>
> I have written a topology and was running it through Storm. Soon after, I
> made some changes to the bolts, built a fresh jar, redeployed it, and ran
> it again with the storm jar command, but it still seems to be referring to
> the old jar, cached somewhere (even though I have already deleted that old
> jar).
>
> I am seriously stuck on this issue; I looked at some posts describing the
> same problem but couldn't find a satisfactory answer.
>
> Looking for expert comments.
>
> Thanks in advance!
>
>
>

Re: Storm Job picking cached/old jar file

Posted by "Jens-U. Mozdzen" <jm...@nde.ag>.
Hi Vineet,

Quoting Vineet Mishra <cl...@gmail.com>:
> Hi All,
>
> I am stuck on a puzzling issue. I have a 3-node Storm
> cluster (apache-storm-0.9.3) with the configuration given below:
>
> node1 - Nimbus, UI
> node2 - Supervisor, Worker
> node3 - Supervisor, Worker
>
> I have written a topology and was running it through Storm. Soon after, I
> made some changes to the bolts, built a fresh jar, redeployed it, and ran
> it again with the storm jar command, but it still seems to be referring to
> the old jar, cached somewhere (even though I have already deleted that old
> jar).

you might want to add *why* you think it's still referring to the old
jar - log entries that still show the old UUID? Objects behaving "the
old way"? Maybe you're barking up the wrong tree; such information might
help to pin down the root cause of the issue.
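For example, something along these lines on a supervisor node usually shows
which jar and topology id a worker was actually launched with (log locations
and the exact message wording depend on your setup and Storm version):

    # the supervisor log records the worker launch command, including the
    # path to the topology's stormjar.jar
    grep -i "launching worker" /var/log/storm/supervisor.log

    # the per-slot worker logs then show what that code is doing
    less /var/log/storm/worker-6700.log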

Regards,
Jens