Posted to user@storm.apache.org by ram kumar <ra...@gmail.com> on 2016/05/18 14:06:30 UTC
storm sparse submit error
Hi,
I tried to submit a Storm topology in production mode (streamparse):
$ *sparse submit*
Cleaning from prior builds...
Creating topology Uber-JAR...
Uber-JAR created:
/home/ram/PYTHON/test/_build/Data-0.0.1-SNAPSHOT-standalone.jar
Deploying "customer-metadata" topology...
Traceback (most recent call last):
  File "/home/ram/PYTHON2.7-ENV/bin/sparse", line 11, in <module>
    sys.exit(main())
  File "/home/ram/PYTHON2.7-ENV/lib/python2.7/site-packages/streamparse/cli/sparse.py", line 57, in main
    args.func(args)
  File "/home/ram/PYTHON2.7-ENV/lib/python2.7/site-packages/streamparse/cli/submit.py", line 247, in main
    wait=args.wait, simple_jar=args.simple_jar)
  File "/home/ram/PYTHON2.7-ENV/lib/python2.7/site-packages/streamparse/cli/submit.py", line 197, in submit_topology
    with ssh_tunnel(env_config.get("user"), host, 6627, port):
  File "/usr/local/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/home/ram/PYTHON/lib/python2.7/site-packages/streamparse/contextmanagers.py", line 32, in ssh_tunnel
    "tunnel to {}:{}.".format(local_port, host, remote_port))
*IOError: Local port: 6627 already in use, unable to open ssh tunnel to 10.154.9.166:6627*
I checked whether port 6627 is busy, but it's not.
Can anyone help me with this?
Thanks
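For anyone hitting the same error: the exception is raised when streamparse cannot bind the local end of the ssh tunnel. That bind test can be reproduced outside streamparse with a minimal Python sketch (illustrative only, not streamparse's actual code):

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Try to bind (host, port); if the bind fails, something already holds it."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return True
    except OSError:  # socket.error on Python 2
        return False
    finally:
        s.close()

# port_is_free(6627) returns False while anything listens on local 6627
```

Note that `netstat -lpn` run without root hides the PID/program column for sockets owned by other users, so a listener can be easy to miss.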
Re: storm sparse submit error
Posted by Sai Dilip Reddy Kiralam <dk...@aadhya-analytics.com>.
Hi,
You can find the topology logs under the Storm installation's logs directory, /storm/logs.
*Best regards,*
*K.Sai Dilip Reddy.*
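To surface the files that were actually written most recently under that logs directory, a small sketch like this can help (paths are illustrative; the recursive glob needs Python 3.5+):

```python
import glob
import os

def recent_logs(log_dir, limit=5):
    """Return the most recently modified *.log files under log_dir."""
    pattern = os.path.join(log_dir, "**", "*.log")
    paths = glob.glob(pattern, recursive=True)
    return sorted(paths, key=os.path.getmtime, reverse=True)[:limit]

# e.g. recent_logs("/storm/logs") on each worker machine
```

Worker logs live on the machines running the workers, not on the machine you submitted from.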
On Thu, May 19, 2016 at 11:49 AM, ram kumar <ra...@gmail.com> wrote:
> [...]
>
Re: storm sparse submit error
Posted by ram kumar <ra...@gmail.com>.
Storm : 0.10.1
Streamparse : 2.1.4
I was running Storm Nimbus on the same IP:
(PYTHON) $ sudo netstat -lpn | grep :6627
tcp        0      0 :::6627      :::*      LISTEN      29199/java
(PYTHON) $
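The netstat output above shows Nimbus itself holding local port 6627, which is why the ssh tunnel could not bind its local end. One generic workaround (a sketch only, not necessarily how streamparse handles this) is to let the kernel pick a free ephemeral port for the local side of the tunnel:

```python
import socket

def free_local_port():
    """Ask the kernel for an unused ephemeral TCP port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 0))  # port 0 means "choose any free port"
    port = s.getsockname()[1]
    s.close()
    return port
```

The returned port could then serve as the local side of a manual tunnel, e.g. `ssh -L <port>:<nimbus-host>:6627 user@<nimbus-host>`.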
Now, when I run the topology from a different IP, it works fine:
{{{
$ sparse submit
Cleaning from prior builds...
Creating topology Uber-JAR...
Uber-JAR created:
/home/ram/PYTHON/test/_build/Data-0.0.1-SNAPSHOT-standalone.jar
Deploying "customer" topology...
ram@10.218.173.100's password:
ssh tunnel to Nimbus 10.218.173.100:6627 established.
Routing Python logging to /home/ram/log/storm/streamparse.
Running lein command to submit topology to nimbus:
lein run -m streamparse.commands.submit_topology/-main
topologies/customer.clj --option 'topology.workers=2' --option
'topology.acker.executors=2' --option '
*streamparse.log.path="/home/ram/log/storm/streamparse"'* --option
'streamparse.log.max_bytes=1000000' --option
'streamparse.log.backup_count=10' --option 'streamparse.log.level="info"'
2490 [main] INFO b.s.u.Utils - Using defaults.yaml from resources
{:option {streamparse.log.level info, streamparse.log.backup_count 10,
streamparse.log.max_bytes 1000000, streamparse.log.path
/home/ram/log/storm/streamparse, topology.acker.executors 2,
topology.workers 2}, :debug false, :port 6627, :host localhost, :help false}
3494 [main] INFO b.s.u.Utils - Using defaults.yaml from resources
3526 [main] INFO b.s.StormSubmitter - Generated ZooKeeper secret payload
for MD5-digest: -7825715999852442250:-4871326514649406713
3528 [main] INFO b.s.s.a.AuthUtils - Got AutoCreds []
3540 [main] INFO b.s.u.StormBoundedExponentialBackoffRetry - The
baseSleepTimeMs [2000] the maxSleepTimeMs [60000] the maxRetries [5]
3546 [main] INFO b.s.u.StormBoundedExponentialBackoffRetry - The
baseSleepTimeMs [2000] the maxSleepTimeMs [60000] the maxRetries [5]
3568 [main] INFO b.s.u.StormBoundedExponentialBackoffRetry - The
baseSleepTimeMs [2000] the maxSleepTimeMs [60000] the maxRetries [5]
3610 [main] INFO b.s.StormSubmitter - Uploading topology jar
/home/ram/PYTHON/customer/_build/Data-0.0.1-SNAPSHOT-standalone.jar to
assigned location:
/home/ram/log/nimbus/inbox/stormjar-f35e27cf-460a-4146-ba4b-348ed7c3ffc6.jar
5927 [main] INFO b.s.StormSubmitter - Successfully uploaded topology jar
to assigned location:
/home/ram/log/nimbus/inbox/stormjar-f35e27cf-460a-4146-ba4b-348ed7c3ffc6.jar
5927 [main] INFO b.s.StormSubmitter - Submitting topology data in
distributed mode with conf
{"storm.zookeeper.topology.auth.scheme":"digest","streamparse.log.backup_count":10,"storm.zookeeper.topology.auth.payload":"-7825715999852442250:-4871326514649406713","streamparse.log.path":"\/home\/ram\/storm\/streamparse","topology.debug":false,"nimbus.thrift.port":6627,"topology.max.spout.pending":5000,"nimbus.host":"localhost","topology.acker.executors":2,"topology.workers":2,"streamparse.log.max_bytes":1000000,"streamparse.log.level":"info","topology.message.timeout.secs":60}
6033 [main] INFO b.s.StormSubmitter - *Finished submitting topology: data*
$
}}}
Then I produced messages to the Kafka broker, but I can't see any logs for
the topology "data" in /home/ram/log/storm/streamparse.
How do I check the submitted topology's logs?
Thanks
On Thu, May 19, 2016 at 12:29 AM, cogumelosmaravilha <cogumelosmaravilha@sapo.pt> wrote:
> [...]
Re: storm sparse submit error
Posted by cogumelosmaravilha <co...@sapo.pt>.
Can you tell us your Apache Storm version and your streamparse version?
And the size of the JAR file created by streamparse?
How did you check the port: lsof, netstat?
Thanks
On 18-05-2016 15:06, ram kumar wrote:
> [...]