Posted to user@flink.apache.org by Punit Naik <na...@gmail.com> on 2016/05/03 14:51:09 UTC

Flink - start-cluster.sh

Hi

I did all the settings required for cluster setup, but when I ran the
start-cluster.sh script, it only started one JobManager, on the master node.
Logs are written only on the master node; the slaves don't have any logs. And
when I ran a program it said:

Resources available to scheduler: Number of instances=0, total number of
slots=0, available slots=0

Can anyone help please?

-- 
Thank You

Regards

Punit Naik
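
The "Number of instances=0" message means that no TaskManagers registered
with the JobManager. A minimal sketch of the standalone configuration this
kind of setup expects, assuming a Flink 1.0 layout (all hostnames below are
placeholders):

    ## conf/flink-conf.yaml (the same file on every node)
    # Hostname the TaskManagers use to reach the JobManager:
    jobmanager.rpc.address: master-host
    # Processing slots offered by each TaskManager:
    taskmanager.numberOfTaskSlots: 1

    ## conf/slaves (on the master; one TaskManager host per line)
    slave-host-1
    slave-host-2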

Re: Flink - start-cluster.sh

Posted by Punit Naik <na...@gmail.com>.
Yes.

On Thu, May 5, 2016 at 3:04 PM, Flavio Pompermaier <po...@okkam.it>
wrote:

> Do you run the start-cluster.sh script as the same user that has the
> passwordless SSH login?
>
>
> On Thu, May 5, 2016 at 11:03 AM, Punit Naik <na...@gmail.com>
> wrote:
>
>> Okay, so it was a configuration mistake on my part, but the
>> start-cluster.sh command still won't work for me. It only starts the
>> JobManager on the master node. Therefore I had to manually start
>> TaskManagers on every node, and then it worked fine. Is anyone familiar
>> with this issue?
>>
>> On Wed, May 4, 2016 at 1:33 PM, Punit Naik <na...@gmail.com>
>> wrote:
>>
>>> Passwordless SSH has been set up across all the machines. And when I
>>> execute the start-cluster.sh script, I can see the master logging into
>>> the slaves, but it does not start anything. It just logs in and logs out.
>>>
>>> I have referred to the documentation on the official site.
>>>
>>>
>>> https://ci.apache.org/projects/flink/flink-docs-release-1.0/quickstart/setup_quickstart.html
>>>
>>> On Wed, May 4, 2016 at 12:43 PM, Flavio Pompermaier <
>>> pompermaier@okkam.it> wrote:
>>>
>>>> I think your slaves didn't come up...have you configured ssh
>>>> password-less login between the master node (the one running the
>>>> start-cluster.sh) and the task managers (listed in the conf/slaves file)?
>>>>
>>>> Best,
>>>> Flavio
>>>>
>>>> On Wed, May 4, 2016 at 8:49 AM, Balaji Rajagopalan <
>>>> balaji.rajagopalan@olacabs.com> wrote:
>>>>
>>>>> What is the Flink documentation you were following to set up your
>>>>> cluster? Can you point to that?
>>>>>
>>>>> On Tue, May 3, 2016 at 6:21 PM, Punit Naik <na...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi
>>>>>>
>>>>>> I did all the settings required for cluster setup, but when I ran the
>>>>>> start-cluster.sh script, it only started one JobManager, on the master
>>>>>> node. Logs are written only on the master node; the slaves don't have
>>>>>> any logs. And when I ran a program it said:
>>>>>>
>>>>>> Resources available to scheduler: Number of instances=0, total number
>>>>>> of slots=0, available slots=0
>>>>>>
>>>>>> Can anyone help please?
>>>>>>
>>>>>> --
>>>>>> Thank You
>>>>>>
>>>>>> Regards
>>>>>>
>>>>>> Punit Naik
>>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> Thank You
>>>
>>> Regards
>>>
>>> Punit Naik
>>>
>>
>>
>>
>> --
>> Thank You
>>
>> Regards
>>
>> Punit Naik
>>
>
>


-- 
Thank You

Regards

Punit Naik

Re: Flink - start-cluster.sh

Posted by Flavio Pompermaier <po...@okkam.it>.
Do you run the start-cluster.sh script as the same user that has the
passwordless SSH login?

On Thu, May 5, 2016 at 11:03 AM, Punit Naik <na...@gmail.com> wrote:

> Okay, so it was a configuration mistake on my part, but the
> start-cluster.sh command still won't work for me. It only starts the
> JobManager on the master node. Therefore I had to manually start
> TaskManagers on every node, and then it worked fine. Is anyone familiar
> with this issue?
>
> On Wed, May 4, 2016 at 1:33 PM, Punit Naik <na...@gmail.com> wrote:
>
>> Passwordless SSH has been set up across all the machines. And when I
>> execute the start-cluster.sh script, I can see the master logging into
>> the slaves, but it does not start anything. It just logs in and logs out.
>>
>> I have referred to the documentation on the official site.
>>
>>
>> https://ci.apache.org/projects/flink/flink-docs-release-1.0/quickstart/setup_quickstart.html
>>
>> On Wed, May 4, 2016 at 12:43 PM, Flavio Pompermaier <pompermaier@okkam.it
>> > wrote:
>>
>>> I think your slaves didn't come up...have you configured ssh
>>> password-less login between the master node (the one running the
>>> start-cluster.sh) and the task managers (listed in the conf/slaves file)?
>>>
>>> Best,
>>> Flavio
>>>
>>> On Wed, May 4, 2016 at 8:49 AM, Balaji Rajagopalan <
>>> balaji.rajagopalan@olacabs.com> wrote:
>>>
>>>> What is the Flink documentation you were following to set up your
>>>> cluster? Can you point to that?
>>>>
>>>> On Tue, May 3, 2016 at 6:21 PM, Punit Naik <na...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi
>>>>>
>>>>> I did all the settings required for cluster setup, but when I ran the
>>>>> start-cluster.sh script, it only started one JobManager, on the master
>>>>> node. Logs are written only on the master node; the slaves don't have
>>>>> any logs. And when I ran a program it said:
>>>>>
>>>>> Resources available to scheduler: Number of instances=0, total number
>>>>> of slots=0, available slots=0
>>>>>
>>>>> Can anyone help please?
>>>>>
>>>>> --
>>>>> Thank You
>>>>>
>>>>> Regards
>>>>>
>>>>> Punit Naik
>>>>>
>>>>
>>>
>>
>>
>> --
>> Thank You
>>
>> Regards
>>
>> Punit Naik
>>
>
>
>
> --
> Thank You
>
> Regards
>
> Punit Naik
>
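
A quick way to verify the point Flavio raises, sketched on the assumption
that conf/slaves lists the TaskManager hosts: run this on the master as the
same user that invokes start-cluster.sh, and it should print "ok" once per
host without any password prompt.

    # BatchMode makes ssh fail outright instead of prompting for a password.
    for host in $(cat conf/slaves); do
        ssh -o BatchMode=yes "$host" 'echo ok'
    done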

Re: Flink - start-cluster.sh

Posted by Punit Naik <na...@gmail.com>.
Okay, so it was a configuration mistake on my part, but the start-cluster.sh
command still won't work for me. It only starts the JobManager on the master
node. Therefore I had to manually start TaskManagers on every node, and then
it worked fine. Is anyone familiar with this issue?

On Wed, May 4, 2016 at 1:33 PM, Punit Naik <na...@gmail.com> wrote:

> Passwordless SSH has been set up across all the machines. And when I
> execute the start-cluster.sh script, I can see the master logging into the
> slaves, but it does not start anything. It just logs in and logs out.
>
> I have referred to the documentation on the official site.
>
>
> https://ci.apache.org/projects/flink/flink-docs-release-1.0/quickstart/setup_quickstart.html
>
> On Wed, May 4, 2016 at 12:43 PM, Flavio Pompermaier <po...@okkam.it>
> wrote:
>
>> I think your slaves didn't come up...have you configured ssh
>> password-less login between the master node (the one running the
>> start-cluster.sh) and the task managers (listed in the conf/slaves file)?
>>
>> Best,
>> Flavio
>>
>> On Wed, May 4, 2016 at 8:49 AM, Balaji Rajagopalan <
>> balaji.rajagopalan@olacabs.com> wrote:
>>
>>> What is the Flink documentation you were following to set up your
>>> cluster? Can you point to that?
>>>
>>> On Tue, May 3, 2016 at 6:21 PM, Punit Naik <na...@gmail.com>
>>> wrote:
>>>
>>>> Hi
>>>>
>>>> I did all the settings required for cluster setup, but when I ran the
>>>> start-cluster.sh script, it only started one JobManager, on the master
>>>> node. Logs are written only on the master node; the slaves don't have
>>>> any logs. And when I ran a program it said:
>>>>
>>>> Resources available to scheduler: Number of instances=0, total number
>>>> of slots=0, available slots=0
>>>>
>>>> Can anyone help please?
>>>>
>>>> --
>>>> Thank You
>>>>
>>>> Regards
>>>>
>>>> Punit Naik
>>>>
>>>
>>
>
>
> --
> Thank You
>
> Regards
>
> Punit Naik
>



-- 
Thank You

Regards

Punit Naik
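
For anyone hitting the same behaviour, the manual workaround described above
looks roughly like this (a sketch only; the exact script arguments have
changed between Flink versions, so check the usage your release's scripts
print):

    # On the master node:
    bin/jobmanager.sh start cluster

    # On every worker node:
    bin/taskmanager.sh start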

Re: Flink - start-cluster.sh

Posted by Punit Naik <na...@gmail.com>.
Passwordless SSH has been set up across all the machines. And when I execute
the start-cluster.sh script, I can see the master logging into the slaves,
but it does not start anything. It just logs in and logs out.

I have referred to the documentation on the official site.

https://ci.apache.org/projects/flink/flink-docs-release-1.0/quickstart/setup_quickstart.html

On Wed, May 4, 2016 at 12:43 PM, Flavio Pompermaier <po...@okkam.it>
wrote:

> I think your slaves didn't come up...have you configured ssh password-less
> login between the master node (the one running the start-cluster.sh) and
> the task managers (listed in the conf/slaves file)?
>
> Best,
> Flavio
>
> On Wed, May 4, 2016 at 8:49 AM, Balaji Rajagopalan <
> balaji.rajagopalan@olacabs.com> wrote:
>
>> What is the Flink documentation you were following to set up your
>> cluster? Can you point to that?
>>
>> On Tue, May 3, 2016 at 6:21 PM, Punit Naik <na...@gmail.com>
>> wrote:
>>
>>> Hi
>>>
>>> I did all the settings required for cluster setup, but when I ran the
>>> start-cluster.sh script, it only started one JobManager, on the master
>>> node. Logs are written only on the master node; the slaves don't have
>>> any logs. And when I ran a program it said:
>>>
>>> Resources available to scheduler: Number of instances=0, total number of
>>> slots=0, available slots=0
>>>
>>> Can anyone help please?
>>>
>>> --
>>> Thank You
>>>
>>> Regards
>>>
>>> Punit Naik
>>>
>>
>


-- 
Thank You

Regards

Punit Naik
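
One way to see why the remote start silently does nothing is to replay by
hand what start-cluster.sh runs over SSH, so any error lands in your
terminal instead of being lost. start-cluster.sh assumes Flink is unpacked
at the same path on every node; /path/to/flink and slave-host-1 below are
placeholders.

    # Start one TaskManager by hand and watch for errors:
    ssh slave-host-1 "/path/to/flink/bin/taskmanager.sh start"

    # Check whether a TaskManager JVM actually came up (jps ships with the JDK):
    ssh slave-host-1 "jps"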

Re: Flink - start-cluster.sh

Posted by Flavio Pompermaier <po...@okkam.it>.
I think your slaves didn't come up...have you configured ssh password-less
login between the master node (the one running the start-cluster.sh) and
the task managers (listed in the conf/slaves file)?

Best,
Flavio

On Wed, May 4, 2016 at 8:49 AM, Balaji Rajagopalan <
balaji.rajagopalan@olacabs.com> wrote:

> What is the Flink documentation you were following to set up your
> cluster? Can you point to that?
>
> On Tue, May 3, 2016 at 6:21 PM, Punit Naik <na...@gmail.com> wrote:
>
>> Hi
>>
>> I did all the settings required for cluster setup, but when I ran the
>> start-cluster.sh script, it only started one JobManager, on the master
>> node. Logs are written only on the master node; the slaves don't have
>> any logs. And when I ran a program it said:
>>
>> Resources available to scheduler: Number of instances=0, total number of
>> slots=0, available slots=0
>>
>> Can anyone help please?
>>
>> --
>> Thank You
>>
>> Regards
>>
>> Punit Naik
>>
>
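
For completeness, setting up the password-less login Flavio describes is
usually a matter of distributing the master's public key; a sketch, with
user@slave-host-1 as a placeholder (skip ssh-keygen if a key already
exists):

    # On the master, as the user that will run start-cluster.sh:
    ssh-keygen -t rsa
    ssh-copy-id user@slave-host-1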

Re: Flink - start-cluster.sh

Posted by Balaji Rajagopalan <ba...@olacabs.com>.
What is the Flink documentation you were following to set up your cluster?
Can you point to that?

On Tue, May 3, 2016 at 6:21 PM, Punit Naik <na...@gmail.com> wrote:

> Hi
>
> I did all the settings required for cluster setup, but when I ran the
> start-cluster.sh script, it only started one JobManager, on the master
> node. Logs are written only on the master node; the slaves don't have any
> logs. And when I ran a program it said:
>
> Resources available to scheduler: Number of instances=0, total number of
> slots=0, available slots=0
>
> Can anyone help please?
>
> --
> Thank You
>
> Regards
>
> Punit Naik
>