Posted to user@mesos.apache.org by Pradeep Chhetri <pr...@gmail.com> on 2015/07/03 19:06:48 UTC

Running storm over mesos

Hello all,

I am trying to run Storm on Mesos using the tutorial
(http://open.mesosphere.com/tutorials/run-storm-on-mesos) on Vagrant. When
I submit a sample topology, no Storm supervisors are spawned on the
mesos-slaves, and I didn't find anything interesting in the logs either.
Can someone help me figure out the problem?
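(For context, I'm submitting roughly like this; the storm-starter jar and
topology class are just an illustration, and the exact jar name depends on
the Storm version:)

    # Submit a sample topology to Nimbus; the last argument is the topology name.
    storm jar examples/storm-starter/storm-starter-topologies-*.jar \
        storm.starter.WordCountTopology myTopo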

Thank you.

-- 
Pradeep Chhetri

Re: Running storm over mesos

Posted by Pradeep Chhetri <pr...@gmail.com>.
Hello Tim,

I think I found the issue: the memory allocated to the two Vagrant VMs
where I was running the mesos-slaves was less than the minimum required to
spawn a supervisor. I raised the VM memory limits and respawned one of
them. Now the supervisor runs, but it fails because the slave goes offline
(a sketch of the Vagrantfile change follows the logs). Here are some of
the logs:

I0703 18:19:39.588933  3177 slave.cpp:1144] Got assigned task node2-31000
for framework 20150703-151956-169978048-5050-12019-0001
I0703 18:19:39.589437  3177 gc.cpp:84] Unscheduling
'/tmp/mesos/slaves/20150703-173159-169978048-5050-23609-S0/frameworks/20150703-151956-169978048-5050-12019-0001'
from gc
I0703 18:19:39.589656  3177 gc.cpp:84] Unscheduling
'/tmp/mesos/slaves/20150703-173159-169978048-5050-23609-S0/frameworks/20150703-151956-169978048-5050-12019-0001/executors/myTopo-1-1435938251'
from gc
I0703 18:19:39.589779  3177 slave.cpp:1254] Launching task node2-31000 for
framework 20150703-151956-169978048-5050-12019-0001
I0703 18:19:39.600589  3177 slave.cpp:4208] Launching executor
myTopo-1-1435938251 of framework 20150703-151956-169978048-5050-12019-0001
in work directory
'/tmp/mesos/slaves/20150703-173159-169978048-5050-23609-S0/frameworks/20150703-151956-169978048-5050-12019-0001/executors/myTopo-1-1435938251/runs/693f59db-7b74-411a-ba4d-c83f32ee2619'
I0703 18:19:39.600769  3177 slave.cpp:1401] Queuing task 'node2-31000' for
executor myTopo-1-1435938251 of framework
'20150703-151956-169978048-5050-12019-0001
I0703 18:19:39.600838  3177 slave.cpp:1144] Got assigned task node2-31001
for framework 20150703-151956-169978048-5050-12019-0001
I0703 18:19:39.603096  3177 gc.cpp:84] Unscheduling
'/tmp/mesos/slaves/20150703-173159-169978048-5050-23609-S0/frameworks/20150703-151956-169978048-5050-12019-0001/executors/againTopo-2-1435942258'
from gc
I0703 18:19:39.603212  3177 slave.cpp:1254] Launching task node2-31001 for
framework 20150703-151956-169978048-5050-12019-0001
I0703 18:19:39.601094  3178 containerizer.cpp:484] Starting container
'693f59db-7b74-411a-ba4d-c83f32ee2619' for executor 'myTopo-1-1435938251'
of framework '20150703-151956-169978048-5050-12019-0001'
I0703 18:19:39.606645  3178 launcher.cpp:130] Forked child with pid '3187'
for container '693f59db-7b74-411a-ba4d-c83f32ee2619'
I0703 18:19:39.619261  3175 fetcher.cpp:238] Fetching URIs using command
'/usr/libexec/mesos/mesos-fetcher'
I0703 18:19:39.619355  3177 slave.cpp:4208] Launching executor
againTopo-2-1435942258 of framework
20150703-151956-169978048-5050-12019-0001 in work directory
'/tmp/mesos/slaves/20150703-173159-169978048-5050-23609-S0/frameworks/20150703-151956-169978048-5050-12019-0001/executors/againTopo-2-1435942258/runs/2cf39864-16e5-45bf-98a1-3e2e6fffdac9'
I0703 18:19:39.629672  3177 slave.cpp:1401] Queuing task 'node2-31001' for
executor againTopo-2-1435942258 of framework
'20150703-151956-169978048-5050-12019-0001
I0703 18:19:39.629956  3175 containerizer.cpp:484] Starting container
'2cf39864-16e5-45bf-98a1-3e2e6fffdac9' for executor
'againTopo-2-1435942258' of framework
'20150703-151956-169978048-5050-12019-0001'
I0703 18:19:39.633669  3175 launcher.cpp:130] Forked child with pid '3191'
for container '2cf39864-16e5-45bf-98a1-3e2e6fffdac9'
I0703 18:19:39.660353  3175 fetcher.cpp:238] Fetching URIs using command
'/usr/libexec/mesos/mesos-fetcher'
I0703 18:19:57.502316  3176 slave.cpp:3165] Monitoring executor
'myTopo-1-1435938251' of framework
'20150703-151956-169978048-5050-12019-0001' in container
'693f59db-7b74-411a-ba4d-c83f32ee2619'
I0703 18:19:57.603291  3181 containerizer.cpp:1123] Executor for container
'693f59db-7b74-411a-ba4d-c83f32ee2619' has exited
I0703 18:19:57.603431  3181 containerizer.cpp:918] Destroying container
'693f59db-7b74-411a-ba4d-c83f32ee2619'
I0703 18:19:57.617431  3174 slave.cpp:3223] Executor 'myTopo-1-1435938251'
of framework 20150703-151956-169978048-5050-12019-0001 exited with status 1
I0703 18:19:57.618479  3174 slave.cpp:2531] Handling status update
TASK_LOST (UUID: da7cce06-4089-4366-b156-4cd9d527a903) for task node2-31000
of framework 20150703-151956-169978048-5050-12019-0001 from @0.0.0.0:0
W0703 18:19:57.619213  3174 containerizer.cpp:814] Ignoring update for
unknown container: 693f59db-7b74-411a-ba4d-c83f32ee2619
I0703 18:19:57.619920  3174 status_update_manager.cpp:317] Received status
update TASK_LOST (UUID: da7cce06-4089-4366-b156-4cd9d527a903) for task
node2-31000 of framework 20150703-151956-169978048-5050-12019-0001
I0703 18:19:57.620934  3174 slave.cpp:2776] Forwarding the update TASK_LOST
(UUID: da7cce06-4089-4366-b156-4cd9d527a903) for task node2-31000 of
framework 20150703-151956-169978048-5050-12019-0001 to
master@192.168.33.10:5050
I0703 18:19:57.628343  3179 status_update_manager.cpp:389] Received status
update acknowledgement (UUID: da7cce06-4089-4366-b156-4cd9d527a903) for
task node2-31000 of framework 20150703-151956-169978048-5050-12019-0001
I0703 18:19:57.628581  3179 slave.cpp:3332] Cleaning up executor
'myTopo-1-1435938251' of framework 20150703-151956-169978048-5050-12019-0001
I0703 18:19:57.629472  3179 gc.cpp:56] Scheduling
'/tmp/mesos/slaves/20150703-173159-169978048-5050-23609-S0/frameworks/20150703-151956-169978048-5050-12019-0001/executors/myTopo-1-1435938251/runs/693f59db-7b74-411a-ba4d-c83f32ee2619'
for gc 6.99999271514074days in the future
I0703 18:19:57.629901  3179 gc.cpp:56] Scheduling
'/tmp/mesos/slaves/20150703-173159-169978048-5050-23609-S0/frameworks/20150703-151956-169978048-5050-12019-0001/executors/myTopo-1-1435938251'
for gc 6.9999927146637days in the future
I0703 18:20:18.695482  3176 slave.cpp:3648] Current disk usage 2.74%. Max
allowed age: 6.108473859231898days
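
For reference, the Vagrantfile change was along these lines (a sketch for
the VirtualBox provider, inside the existing Vagrant.configure block; the
2048 MB figure is an assumption, anything comfortably above the
supervisor's memory requirement should do):

    config.vm.provider "virtualbox" do |vb|
      # Mesos offers the VM's memory to frameworks, so each slave VM needs
      # enough for a Storm worker plus the executor overhead.
      vb.memory = 2048
    end

And since the executor exited with status 1, its sandbox stderr on the
slave should say why; with the default work_dir that is roughly the path
below (the IDs come from the "Launching executor ... in work directory"
line above, and 'latest' is a symlink Mesos keeps to the most recent run):

    cat /tmp/mesos/slaves/<slave-id>/frameworks/<framework-id>/executors/myTopo-1-1435938251/runs/latest/stderr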

Thank you.

-- 
Pradeep Chhetri

In the world of Linux, who needs Windows and Gates...

Re: Running storm over mesos

Posted by Pradeep Chhetri <pr...@gmail.com>.
Hello Tim,

The Nimbus log says:

2015-07-03 17:48:47 o.a.z.ClientCnxn [INFO] Session establishment complete
on server node1/192.168.33.10:2181, sessionid = 0x14e548011e40024,
negotiated timeout = 20000
2015-07-03 17:48:47 b.s.d.nimbus [INFO] Starting Nimbus server...
2015-07-03 17:48:48 s.m.MesosNimbus [INFO] Currently have 2 offers buffered
2015-07-03 17:48:48 s.m.MesosNimbus [INFO] Topologies that need
assignments: #{"newTopo-3-1435942676" "ooooo-5-1435944054"
"againTopo-2-1435942258" "test-4-1435943781" "myTopo-1-1435938251"}
2015-07-03 17:48:48 s.m.MesosNimbus [INFO] Number of available slots: 0
2015-07-03 17:48:58 s.m.MesosNimbus [INFO] Currently have 2 offers buffered
2015-07-03 17:48:58 s.m.MesosNimbus [INFO] Topologies that need
assignments: #{"newTopo-3-1435942676" "ooooo-5-1435944054"
"againTopo-2-1435942258" "test-4-1435943781" "myTopo-1-1435938251"}
2015-07-03 17:48:58 s.m.MesosNimbus [INFO] Number of available slots: 0
2015-07-03 17:49:08 s.m.MesosNimbus [INFO] Currently have 2 offers buffered
2015-07-03 17:49:08 s.m.MesosNimbus [INFO] Topologies that need
assignments: #{"newTopo-3-1435942676" "ooooo-5-1435944054"
"againTopo-2-1435942258" "test-4-1435943781" "myTopo-1-1435938251"}
2015-07-03 17:49:08 s.m.MesosNimbus [INFO] Number of available slots: 0
2015-07-03 17:49:18 s.m.MesosNimbus [INFO] Currently have 2 offers buffered
2015-07-03 17:49:18 s.m.MesosNimbus [INFO] Topologies that need
assignments: #{"newTopo-3-1435942676" "ooooo-5-1435944054"
"againTopo-2-1435942258" "test-4-1435943781" "myTopo-1-1435938251"}
2015-07-03 17:49:18 s.m.MesosNimbus [INFO] Number of available slots: 0


There is something interesting in the mesos-slave logs:

W0703 17:48:46.204479 23660 slave.cpp:1934] Ignoring updating pid for
framework 20150703-151956-169978048-5050-12019-0001 because it does not
exist

That framework ID belongs to Storm.

The mesos-master log says:

lave(1)@192.168.33.10:5051 (node1) for framework
20150703-151956-169978048-5050-12019-0001 (Storm!!!) at
scheduler-4bc2ee4e-7d62-4ef9-b04c-14fb92ca3ee1@192.168.33.10:38445
I0703 17:54:21.237983 23627 master.cpp:2273] Processing ACCEPT call for
offers: [ 20150703-173159-169978048-5050-23609-O230 ] on slave
20150703-151956-169978048-5050-12019-S3 at slave(1)@192.168.33.11:5051
(node2) for framework 20150703-151956-169978048-5050-12019-0001 (Storm!!!)
at scheduler-4bc2ee4e-7d62-4ef9-b04c-14fb92ca3ee1@192.168.33.10:38445
I0703 17:54:21.239336 23627 hierarchical.hpp:648] Recovered cpus(*):1;
mem(*):229; disk(*):34260; ports(*):[31000-32000] (total allocatable:
cpus(*):1; mem(*):229; disk(*):34260; ports(*):[31000-32000]) on slave
20150703-151956-169978048-5050-12019-S0 from framework
20150703-151956-169978048-5050-12019-0001
I0703 17:54:21.240054 23627 hierarchical.hpp:648] Recovered cpus(*):2;
mem(*):497; disk(*):34260; ports(*):[31000-32000] (total allocatable:
cpus(*):2; mem(*):497; disk(*):34260; ports(*):[31000-32000]) on slave
20150703-151956-169978048-5050-12019-S3 from framework
20150703-151956-169978048-5050-12019-0001
I0703 17:54:22.108049 23621 http.cpp:516] HTTP request for
'/master/state.json'
I0703 17:54:24.108803 23621 http.cpp:516] HTTP request for
'/master/state.json'
I0703 17:54:26.965812 23621 master.cpp:3760] Sending 2 offers to framework
20150703-151956-169978048-5050-12019-0001 (Storm!!!) at
scheduler-4bc2ee4e-7d62-4ef9-b04c-14fb92ca3ee1@192.168.33.10:38445
I0703 17:54:33.117069 23622 http.cpp:516] HTTP request for
'/master/state.json'
I0703 17:54:35.118489 23622 http.cpp:516] HTTP request for
'/master/state.json'
I0703 17:54:41.238107 23622 master.cpp:2273] Processing ACCEPT call for
offers: [ 20150703-173159-169978048-5050-23609-O231 ] on slave
20150703-151956-169978048-5050-12019-S3 at slave(1)@192.168.33.11:5051
(node2) for framework 20150703-151956-169978048-5050-12019-0001 (Storm!!!)
at scheduler-4bc2ee4e-7d62-4ef9-b04c-14fb92ca3ee1@192.168.33.10:38445
I0703 17:54:41.238258 23622 master.cpp:2273] Processing ACCEPT call for
offers: [ 20150703-173159-169978048-5050-23609-O232 ] on slave
20150703-151956-169978048-5050-12019-S0 at slave(1)@192.168.33.10:5051
(node1) for framework 20150703-151956-169978048-5050-12019-0001 (Storm!!!)
at scheduler-4bc2ee4e-7d62-4ef9-b04c-14fb92ca3ee1@192.168.33.10:38445
I0703 17:54:41.239572 23622 hierarchical.hpp:648] Recovered cpus(*):2;
mem(*):497; disk(*):34260; ports(*):[31000-32000] (total allocatable:
cpus(*):2; mem(*):497; disk(*):34260; ports(*):[31000-32000]) on slave
20150703-151956-169978048-5050-12019-S3 from framework
20150703-151956-169978048-5050-12019-0001
I0703 17:54:41.240078 23622 hierarchical.hpp:648] Recovered cpus(*):1;
mem(*):229; disk(*):34260; ports(*):[31000-32000] (total allocatable:
cpus(*):1; mem(*):229; disk(*):34260; ports(*):[31000-32000]) on slave
20150703-151956-169978048-5050-12019-S0 from framework
20150703-151956-169978048-5050-12019-0001
I0703 17:54:44.120290 23625 http.cpp:516] HTTP request for
'/master/state.json'
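
For what it's worth, the offers above only carry mem(*):229 and mem(*):497.
If that is below what MesosNimbus needs to carve out a worker slot (the
storm-mesos defaults are reportedly on the order of 1 GB per worker), it
would explain the "Number of available slots: 0" lines in the Nimbus log.
Besides giving the VMs more memory, the per-slot demand can apparently be
lowered in storm.yaml. A sketch, assuming the resource keys documented by
the storm-mesos project (the values are illustrative, not recommendations):

    # storm.yaml (on the machine running MesosNimbus)
    topology.mesos.worker.cpu: 0.5
    topology.mesos.worker.mem.mb: 256     # must fit inside a single offer
    topology.mesos.executor.cpu: 0.1
    topology.mesos.executor.mem.mb: 128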

Sorry for copy-pasting the logs directly and making this hard to read.

Thank you.


-- 
Pradeep Chhetri

In the world of Linux, who needs Windows and Gates...

Re: Running storm over mesos

Posted by CCAAT <cc...@tampabay.rr.com>.

Sometimes it helps to just read about some of the various ways to use
storm. Here are some links describing what others have done:


http://tutorials.github.io/pages/creating-a-production-storm-cluster.html?ts=1340499018#.VM67mz5VHxg

https://storm.apache.org/documentation/Setting-up-a-Storm-cluster.html


And of course this reference, just to be complete.
https://storm.canonical.com/

Re: Running storm over mesos

Posted by Tim Chen <ti...@mesosphere.io>.
Hi Pradeep,

Without any more information, it's quite impossible to know what's going on.

What's in the slave logs and storm framework logs?

Tim
