Posted to dev@vcl.apache.org by Mike Haudenschild <mi...@longsight.com> on 2012/02/29 19:41:03 UTC

Dev feedback requested, management commands sent to Cygwin

Good afternoon, devs --

I've been experiencing slower-than-expected reservation provisioning times
on a VCL infrastructure that uses a SAN for all storage (all ESXi, on
blades).  I first noticed it when I'd click the "Connect!" button on a
reservation and the RDP connection wouldn't open the first time.  If I
restarted the RDP client 15-30 seconds later, the connection would
succeed.

Watching vcld.log, I found that connecting to the Cygwin shell from the
management node was taking 6-10 seconds (whereas the same connections to
servers using local-disk storage take 1-2 seconds).  I can replicate the
behavior by running ssh -i /etc/vcl/vcl.key <target machine> from the
management node.
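
(As a rough check of the per-connection overhead by itself, I can time a
trivial remote command from the management node, e.g.

time ssh -i /etc/vcl/vcl.key <target machine> echo ok

where the elapsed time is almost entirely SSH connection setup and Cygwin
shell startup, since the remote command itself does nothing interesting.)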

It really hit home when I started a bash shell LOCALLY (with bash --login
-i -x) on a target Windows VM and watched how long it took just to get to a
bash prompt.  Each of the startup scripts took a long time.  (I'm not
running bash-completion, a common culprit in slow Cygwin shell
startups.)
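
(A quick way to put a number on the local startup cost alone is something
like time bash --login -i -c exit from a Cygwin window on the VM, which
times just the startup scripts with no SSH involved.)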

I *think* -- requesting confirmation of this -- that each time the
management node wants to issue a command to a remote computer it initiates
a new SSH connection, then closes that connection when the command finishes
processing.  Is that accurate?  If so, that would mean that those 6-10
seconds would be compounded several times over while the management node
prepares the remote computer for my reservation.  I'm currently
investigating moving Cygwin into a RAMdisk on the VM images, but that only
makes sense if the above assumption about multiple SSH sessions is accurate.

The latency on the SAN connection is very low, and ESXi reports that
latencies on the virtual disks are low.  I have /etc/hosts set up, DNS
resolves fine, and pings between the management node and the VMs are fine.

Has anyone else run into any similar behavior with Cygwin?

Many thanks,
Mike

--
*Mike Haudenschild*
Education Systems Manager
Longsight Group
(740) 599-5005 x809
mike@longsight.com
www.longsight.com

Re: Dev feedback requested, management commands sent to Cygwin

Posted by Andy Kurth <an...@ncsu.edu>.
On Thu, Mar 1, 2012 at 10:34 AM, Mike Haudenschild <mi...@longsight.com> wrote:
> Hi Andy,
>
> Thanks for these suggestions.  When I connect to the target using the -vvv
> option, it stalls in two places: when using /etc/vcl/vcl.key, and
> immediately before showing the shell prompt.

I'd play around with the ssh_config and sshd_config settings.  To see
the connection information from the perspective of the host you're
connecting to, edit /etc/sshd_config and set "LogLevel DEBUG3".  Stop
the sshd service and then run the following from a Cygwin window:
/usr/sbin/sshd.exe -d
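
For example, the relevant lines in the target's /etc/sshd_config would be
something like this (exact contents vary by image; UseDNS no is worth
having in there as well):

# temporary debug logging plus the DNS setting
LogLevel DEBUG3
UseDNS no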

If you only see slowness on some images while waiting for the shell prompt,
check /home/root/.bash* and the other scripts under /etc which are
run automatically.  You may find commands in them that slow things
down and could be commented out.  For example, one of
the scripts runs chmod on everything under /tmp.

> I'm currently running the 2.2.1 release of the management node code.  I'm
> assuming that the changes you noted would be made starting with the code
> from trunk?  If so, are there any problems running the management node code
> from trunk with stock 2.2.1 front-end code?

Yes, trunk.  The backend code is pretty stable and will work.  You
should be able to update it without updating the web code.  I haven't
tried running the web code from trunk so I'm not sure if there are any
issues.

-Andy

> Thanks for your help!
>
> Mike
>
>
> On Thu, Mar 1, 2012 at 08:51, Andy Kurth <an...@ncsu.edu> wrote:
>
>> I have seen similar slowness.  Even on a pretty powerful workstation,
>> a Cygwin shell can take a few seconds to open.  If SSH is taking
>> significantly longer than running Cygwin locally, you can run 'ssh
>> -vvv' to try to figure out which stage in the SSH connection sequence
>> is taking the most time.  You may be able to tweak the sshd_config
>> settings on the target.  Make sure UseDNS is set to no.
>>
>> Yes, vcld currently establishes a new connection for every SSH call.
>> Things should be quicker with VCL 2.3.  I added code to establish a
>> single SSH connection and then pass commands to it using
>> Net::SSH::Expect.  This code is in the OS.pm::execute_new()
>> subroutine.  The code in the repository is not currently configured to
>> call this but you can test it by uncommenting the following lines:
>>
>> In OS.pm::execute().
>> #return execute_new(@_);
>>
>> In utils.pm::run_ssh_command:
>> #return VCL::Module::OS::execute($node, $command, $output_level,
>> $timeout_seconds, $max_attempts, $port, $user);
>>
>> Also, you'll need to run install_perl_libs.pl or manually install
>> Net::SSH::Expect.
>>
>> -Andy
>>
>> On Wed, Feb 29, 2012 at 5:59 PM, James O'Dell <jo...@fullerton.edu>
>> wrote:
>> >
>> > Afaik the vcld process normally starts a new connection each
>> > time it wants to do something on the client.
>> >
>> > It may batch a few commands during a single session - like when
>> > it creates a user. But for most part it creates a new connection
>> > each time.
>> >
>> > On 2/29/2012 2:34 PM, Mike Haudenschild wrote:
>> >> Hi Jim,
>> >>
>> >> I'm working from a new image, so (unfortunately?) Cygwin is already the
>> >> latest version.
>> >>
>> >> I've disabled many of the default items in /etc/profile to try and
>> speed up
>> >> initial connections.  Do you happen to know if the management node
>> >> opens/closes the SSH connection for each command it issues, or uses the
>> >> same SSH session for multiple commands?
>> >>
>> >> Many thanks,
>> >> Mike
>> >>
>> >>
>> >> On Wed, Feb 29, 2012 at 14:42, James O'Dell <jo...@fullerton.edu>
>> wrote:
>> >>
>> >> Hey Mike,
>> >>
>> >> I had a similar issue with Cygwin.
>> >>
>> >> Basically, I installed Cygwin when I created the image. Over
>> >> the course of time I patched the OS, but not Cygwin.
>> >> This caused problems.
>> >>
>> >> I eventually removed Cygwin, and reinstalled it (had to
>> >> re-run the Cygwin patch up stuff again).
>> >>
>> >> (Pay close attention to who you are actually 'logged in as'
>> >> when installing Cygwin. I think I needed to be 'root')
>> >>
>> >> Anyway, a reinstall of Cygwin helped me out.
>> >>
>> >> Hope this helps,
>> >>
>> >> __Jim
>> >>
>> >>
>> >> On 2/29/2012 10:41 AM, Mike Haudenschild wrote:
>> >>>>> Good afternoon, devs --
>> >>>>>
>> >>>>> I've been experiencing slower than expected reservation provisioning
>> >> times
>> >>>>> on a VCL infrastructure that uses a SAN for all storage (all ESXi, on
>> >>>>> blades).  I first noticed it when I'd click the "Connect!" button on
>> a
>> >>>>> reservation and the RDP connection wouldn't open the first time.
>> >>>>>  Restarting the RDP client 15-30 seconds later, the connection would
>> >>>>> succeed.
>> >>>>>
>> >>>>> Watching vcld.log, I found that connecting to the Cygwin shell from
>> the
>> >>>>> management node was taking 6-10 seconds (whereas the same
>> connections on
>> >>>>> servers using local-disk storage take 1-2 seconds).  I can replicate
>> the
>> >>>>> behavior running ssh -i /etc/vcl/vcl.key <target machine> from the
>> >>>>> management node.
>> >>>>>
>> >>>>> It really hit home when I started a bash shell LOCALLY (with bash
>> --login
>> >>>>> -i -x) on a target Windows VM and watched how long it took just to
>> get
>> >> to a
>> >>>>> bash prompt.  Each of the startup scripts took a long time.  (I'm not
>> >>>>> running bash-completion, a common complaint about slow Cygwin shell
>> >>>>> startups.)
>> >>>>>
>> >>>>> I *think* -- requesting confirmation of this -- that each time the
>> >>>>> management node wants to issue a command to a remote computer it
>> >> initiates
>> >>>>> a new SSH connection, then closes that connection when the command
>> >> finishes
>> >>>>> processing.  Is that accurate?  If so, that would mean that those
>> 6-10
>> >>>>> seconds would be compounded several times over while the management
>> node
>> >>>>> prepares the remote computer for my reservation.  I'm currently
>> >>>>> investigating moving Cygwin into a RAMdisk on the VM images, but that
>> >> only
>> >>>>> makes sense if the above assumption about multiple SSH sessions is
>> >> accurate.
>> >>>>>
>> >>>>> The latency on the SAN connection is very low, and ESXi reports that
>> >>>>> latencies on the virtual disks are low.  I have /etc/hosts set up,
>> DNS
>> >>>>> resolves fine, and pings between the management node and VMs are
>> fine.
>> >>>>>
>> >>>>> Has anyone else run into any similar behavior with Cygwin?
>> >>>>>
>> >>>>> Many thanks,
>> >>>>> Mike
>> >>>>>
>> >>>>> --
>> >>>>> *Mike Haudenschild*
>> >>>>> Education Systems Manager
>> >>>>> Longsight Group
>> >>>>> (740) 599-5005 x809
>> >>>>> mike@longsight.com
>> >>>>> www.longsight.com
>> >>>>>
>> >>
>> >>
>> >>>
>> >>
>> >
>> > --
>> > Jim O'Dell
>> > Network Analyst
>> > California State University Fullerton
>> > Email: jodell@fullerton.edu
>> > Phone: (657) 278-2256
>>

Re: Dev feedback requested, management commands sent to Cygwin

Posted by Mike Haudenschild <mi...@longsight.com>.
Hi Andy,

Thanks for these suggestions.  When I connect to the target using the -vvv
option, it stalls in two places: when using /etc/vcl/vcl.key, and
immediately before showing the shell prompt.

I'm currently running the 2.2.1 release of the management node code.  I'm
assuming that the changes you noted would be made starting with the code
from trunk?  If so, are there any problems running the management node code
from trunk with stock 2.2.1 front-end code?

Thanks for your help!

Mike


On Thu, Mar 1, 2012 at 08:51, Andy Kurth <an...@ncsu.edu> wrote:

> I have seen similar slowness.  Even on a pretty powerful workstation,
> a Cygwin shell can take a few seconds to open.  If SSH is taking
> significantly longer than running Cygwin locally, you can run 'ssh
> -vvv' to try to figure out which stage in the SSH connection sequence
> is taking the most time.  You may be able to tweak the sshd_config
> settings on the target.  Make sure UseDNS is set to no.
>
> Yes, vcld currently establishes a new connection for every SSH call.
> Things should be quicker with VCL 2.3.  I added code to establish a
> single SSH connection and then pass commands to it using
> Net::SSH::Expect.  This code is in the OS.pm::execute_new()
> subroutine.  The code in the repository is not currently configured to
> call this but you can test it by uncommenting the following lines:
>
> In OS.pm::execute().
> #return execute_new(@_);
>
> In utils.pm::run_ssh_command:
> #return VCL::Module::OS::execute($node, $command, $output_level,
> $timeout_seconds, $max_attempts, $port, $user);
>
> Also, you'll need to run install_perl_libs.pl or manually install
> Net::SSH::Expect.
>
> -Andy
>
> On Wed, Feb 29, 2012 at 5:59 PM, James O'Dell <jo...@fullerton.edu>
> wrote:
> >
> > Afaik the vcld process normally starts a new connection each
> > time it wants to do something on the client.
> >
> > It may batch a few commands during a single session - like when
> > it creates a user. But for most part it creates a new connection
> > each time.
> >
> > On 2/29/2012 2:34 PM, Mike Haudenschild wrote:
> >> Hi Jim,
> >>
> >> I'm working from a new image, so (unfortunately?) Cygwin is already the
> >> latest version.
> >>
> >> I've disabled many of the default items in /etc/profile to try and
> speed up
> >> initial connections.  Do you happen to know if the management node
> >> opens/closes the SSH connection for each command it issues, or uses the
> >> same SSH session for multiple commands?
> >>
> >> Many thanks,
> >> Mike
> >>
> >>
> >> On Wed, Feb 29, 2012 at 14:42, James O'Dell <jo...@fullerton.edu>
> wrote:
> >>
> >> Hey Mike,
> >>
> >> I had a similar issue with Cygwin.
> >>
> >> Basically, I installed Cygwin when I created the image. Over
> >> the course of time I patched the OS, but not Cygwin.
> >> This caused problems.
> >>
> >> I eventually removed Cygwin, and reinstalled it (had to
> >> re-run the Cygwin patch up stuff again).
> >>
> >> (Pay close attention to who you are actually 'logged in as'
> >> when installing Cygwin. I think I needed to be 'root')
> >>
> >> Anyway, a reinstall of Cygwin helped me out.
> >>
> >> Hope this helps,
> >>
> >> __Jim
> >>
> >>
> >> On 2/29/2012 10:41 AM, Mike Haudenschild wrote:
> >>>>> Good afternoon, devs --
> >>>>>
> >>>>> I've been experiencing slower than expected reservation provisioning
> >> times
> >>>>> on a VCL infrastructure that uses a SAN for all storage (all ESXi, on
> >>>>> blades).  I first noticed it when I'd click the "Connect!" button on
> a
> >>>>> reservation and the RDP connection wouldn't open the first time.
> >>>>>  Restarting the RDP client 15-30 seconds later, the connection would
> >>>>> succeed.
> >>>>>
> >>>>> Watching vcld.log, I found that connecting to the Cygwin shell from
> the
> >>>>> management node was taking 6-10 seconds (whereas the same
> connections on
> >>>>> servers using local-disk storage take 1-2 seconds).  I can replicate
> the
> >>>>> behavior running ssh -i /etc/vcl/vcl.key <target machine> from the
> >>>>> management node.
> >>>>>
> >>>>> It really hit home when I started a bash shell LOCALLY (with bash
> --login
> >>>>> -i -x) on a target Windows VM and watched how long it took just to
> get
> >> to a
> >>>>> bash prompt.  Each of the startup scripts took a long time.  (I'm not
> >>>>> running bash-completion, a common complaint about slow Cygwin shell
> >>>>> startups.)
> >>>>>
> >>>>> I *think* -- requesting confirmation of this -- that each time the
> >>>>> management node wants to issue a command to a remote computer it
> >> initiates
> >>>>> a new SSH connection, then closes that connection when the command
> >> finishes
> >>>>> processing.  Is that accurate?  If so, that would mean that those
> 6-10
> >>>>> seconds would be compounded several times over while the management
> node
> >>>>> prepares the remote computer for my reservation.  I'm currently
> >>>>> investigating moving Cygwin into a RAMdisk on the VM images, but that
> >> only
> >>>>> makes sense if the above assumption about multiple SSH sessions is
> >> accurate.
> >>>>>
> >>>>> The latency on the SAN connection is very low, and ESXi reports that
> >>>>> latencies on the virtual disks are low.  I have /etc/hosts set up,
> DNS
> >>>>> resolves fine, and pings between the management node and VMs are
> fine.
> >>>>>
> >>>>> Has anyone else run into any similar behavior with Cygwin?
> >>>>>
> >>>>> Many thanks,
> >>>>> Mike
> >>>>>
> >>>>> --
> >>>>> *Mike Haudenschild*
> >>>>> Education Systems Manager
> >>>>> Longsight Group
> >>>>> (740) 599-5005 x809
> >>>>> mike@longsight.com
> >>>>> www.longsight.com
> >>>>>
> >>
> >>
> >>>
> >>
> >
> > --
> > Jim O'Dell
> > Network Analyst
> > California State University Fullerton
> > Email: jodell@fullerton.edu
> > Phone: (657) 278-2256
>

Re: Dev feedback requested, management commands sent to Cygwin

Posted by Andy Kurth <an...@ncsu.edu>.
I have seen similar slowness.  Even on a pretty powerful workstation,
a Cygwin shell can take a few seconds to open.  If SSH is taking
significantly longer than running Cygwin locally, you can run 'ssh
-vvv' to try to figure out which stage in the SSH connection sequence
is taking the most time.  You may be able to tweak the sshd_config
settings on the target.  Make sure UseDNS is set to no.
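
With "UseDNS yes", sshd attempts a reverse DNS lookup on every incoming
connection, and a slow or failing lookup can add several seconds by itself,
so the line to check in the target's sshd_config is simply:

UseDNS no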

Yes, vcld currently establishes a new connection for every SSH call.
Things should be quicker with VCL 2.3.  I added code to establish a
single SSH connection and then pass commands to it using
Net::SSH::Expect.  This code is in the OS.pm::execute_new()
subroutine.  The code in the repository is not currently configured to
call this but you can test it by uncommenting the following lines:

In OS.pm::execute():
#return execute_new(@_);

In utils.pm::run_ssh_command:
#return VCL::Module::OS::execute($node, $command, $output_level,
$timeout_seconds, $max_attempts, $port, $user);

Also, you'll need to run install_perl_libs.pl or manually install
Net::SSH::Expect.
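
In case anyone wants to experiment with the idea outside of vcld first,
here is a rough, untested sketch of a persistent Net::SSH::Expect session
on its own (the host name and commands are made up, and this is not the
actual execute_new() code):

#!/usr/bin/perl
# Sketch: reuse one SSH session for several commands instead of paying
# the connection + Cygwin shell startup cost on every call.
use strict;
use warnings;
use Net::SSH::Expect;

my $ssh = Net::SSH::Expect->new(
    host       => 'target-vm',            # hypothetical node name
    user       => 'root',
    ssh_option => '-i /etc/vcl/vcl.key',  # key-based auth, no password prompt
    raw_pty    => 1,
    timeout    => 10,
);

$ssh->run_ssh() or die "ssh process failed to start: $!";
$ssh->read_all(2);    # swallow the login banner and first prompt

# Each exec() reuses the already-open session, so the multi-second
# startup cost is paid once per reservation, not once per command.
foreach my $command ('hostname', 'cygcheck -c cygwin', 'net user') {
    my $output = $ssh->exec($command);
    print "=== $command ===\n$output\n";
}

$ssh->close();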

-Andy

On Wed, Feb 29, 2012 at 5:59 PM, James O'Dell <jo...@fullerton.edu> wrote:
>
> Afaik the vcld process normally starts a new connection each
> time it wants to do something on the client.
>
> It may batch a few commands during a single session - like when
> it creates a user. But for most part it creates a new connection
> each time.
>
> On 2/29/2012 2:34 PM, Mike Haudenschild wrote:
>> Hi Jim,
>>
>> I'm working from a new image, so (unfortunately?) Cygwin is already the
>> latest version.
>>
>> I've disabled many of the default items in /etc/profile to try and speed up
>> initial connections.  Do you happen to know if the management node
>> opens/closes the SSH connection for each command it issues, or uses the
>> same SSH session for multiple commands?
>>
>> Many thanks,
>> Mike
>>
>>
>> On Wed, Feb 29, 2012 at 14:42, James O'Dell <jo...@fullerton.edu> wrote:
>>
>> Hey Mike,
>>
>> I had a similar issue with Cygwin.
>>
>> Basically, I installed Cygwin when I created the image. Over
>> the course of time I patched the OS, but not Cygwin.
>> This caused problems.
>>
>> I eventually removed Cygwin, and reinstalled it (had to
>> re-run the Cygwin patch up stuff again).
>>
>> (Pay close attention to who you are actually 'logged in as'
>> when installing Cygwin. I think I needed to be 'root')
>>
>> Anyway, a reinstall of Cygwin helped me out.
>>
>> Hope this helps,
>>
>> __Jim
>>
>>
>> On 2/29/2012 10:41 AM, Mike Haudenschild wrote:
>>>>> Good afternoon, devs --
>>>>>
>>>>> I've been experiencing slower than expected reservation provisioning
>> times
>>>>> on a VCL infrastructure that uses a SAN for all storage (all ESXi, on
>>>>> blades).  I first noticed it when I'd click the "Connect!" button on a
>>>>> reservation and the RDP connection wouldn't open the first time.
>>>>>  Restarting the RDP client 15-30 seconds later, the connection would
>>>>> succeed.
>>>>>
>>>>> Watching vcld.log, I found that connecting to the Cygwin shell from the
>>>>> management node was taking 6-10 seconds (whereas the same connections on
>>>>> servers using local-disk storage take 1-2 seconds).  I can replicate the
>>>>> behavior running ssh -i /etc/vcl/vcl.key <target machine> from the
>>>>> management node.
>>>>>
>>>>> It really hit home when I started a bash shell LOCALLY (with bash --login
>>>>> -i -x) on a target Windows VM and watched how long it took just to get
>> to a
>>>>> bash prompt.  Each of the startup scripts took a long time.  (I'm not
>>>>> running bash-completion, a common complaint about slow Cygwin shell
>>>>> startups.)
>>>>>
>>>>> I *think* -- requesting confirmation of this -- that each time the
>>>>> management node wants to issue a command to a remote computer it
>> initiates
>>>>> a new SSH connection, then closes that connection when the command
>> finishes
>>>>> processing.  Is that accurate?  If so, that would mean that those 6-10
>>>>> seconds would be compounded several times over while the management node
>>>>> prepares the remote computer for my reservation.  I'm currently
>>>>> investigating moving Cygwin into a RAMdisk on the VM images, but that
>> only
>>>>> makes sense if the above assumption about multiple SSH sessions is
>> accurate.
>>>>>
>>>>> The latency on the SAN connection is very low, and ESXi reports that
>>>>> latencies on the virtual disks are low.  I have /etc/hosts set up, DNS
>>>>> resolves fine, and pings between the management node and VMs are fine.
>>>>>
>>>>> Has anyone else run into any similar behavior with Cygwin?
>>>>>
>>>>> Many thanks,
>>>>> Mike
>>>>>
>>>>> --
>>>>> *Mike Haudenschild*
>>>>> Education Systems Manager
>>>>> Longsight Group
>>>>> (740) 599-5005 x809
>>>>> mike@longsight.com
>>>>> www.longsight.com
>>>>>
>>
>>
>>>
>>
>
> --
> Jim O'Dell
> Network Analyst
> California State University Fullerton
> Email: jodell@fullerton.edu
> Phone: (657) 278-2256

Re: Dev feedback requested, management commands sent to Cygwin

Posted by James O'Dell <jo...@fullerton.edu>.
AFAIK the vcld process normally starts a new connection each
time it wants to do something on the client.

It may batch a few commands during a single session - like when
it creates a user - but for the most part it creates a new connection
each time.

On 2/29/2012 2:34 PM, Mike Haudenschild wrote:
> Hi Jim,
> 
> I'm working from a new image, so (unfortunately?) Cygwin is already the
> latest version.
> 
> I've disabled many of the default items in /etc/profile to try and speed up
> initial connections.  Do you happen to know if the management node
> opens/closes the SSH connection for each command it issues, or uses the
> same SSH session for multiple commands?
> 
> Many thanks,
> Mike
> 
> 
> On Wed, Feb 29, 2012 at 14:42, James O'Dell <jo...@fullerton.edu> wrote:
> 
> Hey Mike,
> 
> I had a similar issue with Cygwin.
> 
> Basically, I installed Cygwin when I created the image. Over
> the course of time I patched the OS, but not Cygwin.
> This caused problems.
> 
> I eventually removed Cygwin, and reinstalled it (had to
> re-run the Cygwin patch up stuff again).
> 
> (Pay close attention to who you are actually 'logged in as'
> when installing Cygwin. I think I needed to be 'root')
> 
> Anyway, a reinstall of Cygwin helped me out.
> 
> Hope this helps,
> 
> __Jim
> 
> 
> On 2/29/2012 10:41 AM, Mike Haudenschild wrote:
>>>> Good afternoon, devs --
>>>>
>>>> I've been experiencing slower than expected reservation provisioning
> times
>>>> on a VCL infrastructure that uses a SAN for all storage (all ESXi, on
>>>> blades).  I first noticed it when I'd click the "Connect!" button on a
>>>> reservation and the RDP connection wouldn't open the first time.
>>>>  Restarting the RDP client 15-30 seconds later, the connection would
>>>> succeed.
>>>>
>>>> Watching vcld.log, I found that connecting to the Cygwin shell from the
>>>> management node was taking 6-10 seconds (whereas the same connections on
>>>> servers using local-disk storage take 1-2 seconds).  I can replicate the
>>>> behavior running ssh -i /etc/vcl/vcl.key <target machine> from the
>>>> management node.
>>>>
>>>> It really hit home when I started a bash shell LOCALLY (with bash --login
>>>> -i -x) on a target Windows VM and watched how long it took just to get
> to a
>>>> bash prompt.  Each of the startup scripts took a long time.  (I'm not
>>>> running bash-completion, a common complaint about slow Cygwin shell
>>>> startups.)
>>>>
>>>> I *think* -- requesting confirmation of this -- that each time the
>>>> management node wants to issue a command to a remote computer it
> initiates
>>>> a new SSH connection, then closes that connection when the command
> finishes
>>>> processing.  Is that accurate?  If so, that would mean that those 6-10
>>>> seconds would be compounded several times over while the management node
>>>> prepares the remote computer for my reservation.  I'm currently
>>>> investigating moving Cygwin into a RAMdisk on the VM images, but that
> only
>>>> makes sense if the above assumption about multiple SSH sessions is
> accurate.
>>>>
>>>> The latency on the SAN connection is very low, and ESXi reports that
>>>> latencies on the virtual disks are low.  I have /etc/hosts set up, DNS
>>>> resolves fine, and pings between the management node and VMs are fine.
>>>>
>>>> Has anyone else run into any similar behavior with Cygwin?
>>>>
>>>> Many thanks,
>>>> Mike
>>>>
>>>> --
>>>> *Mike Haudenschild*
>>>> Education Systems Manager
>>>> Longsight Group
>>>> (740) 599-5005 x809
>>>> mike@longsight.com
>>>> www.longsight.com
>>>>
> 
> 
>>
> 

--
Jim O'Dell
Network Analyst
California State University Fullerton
Email: jodell@fullerton.edu
Phone: (657) 278-2256

Re: Dev feedback requested, management commands sent to Cygwin

Posted by Mike Haudenschild <mi...@longsight.com>.
Hi Jim,

I'm working from a new image, so (unfortunately?) Cygwin is already the
latest version.

I've disabled many of the default items in /etc/profile to try to speed up
initial connections.  Do you happen to know whether the management node
opens and closes an SSH connection for each command it issues, or uses the
same SSH session for multiple commands?

Many thanks,
Mike


On Wed, Feb 29, 2012 at 14:42, James O'Dell <jo...@fullerton.edu> wrote:

>
> Hey Mike,
>
> I had a similar issue with Cygwin.
>
> Basically, I installed Cygwin when I created the image. Over
> the course of time I patched the OS, but not Cygwin.
> This caused problems.
>
> I eventually removed Cygwin, and reinstalled it (had to
> re-run the Cygwin patch up stuff again).
>
> (Pay close attention to who you are actually 'logged in as'
> when installing Cygwin. I think I needed to be 'root')
>
> Anyway, a reinstall of Cygwin helped me out.
>
> Hope this helps,
>
> __Jim
>
>
> On 2/29/2012 10:41 AM, Mike Haudenschild wrote:
> > Good afternoon, devs --
> >
> > I've been experiencing slower than expected reservation provisioning
> times
> > on a VCL infrastructure that uses a SAN for all storage (all ESXi, on
> > blades).  I first noticed it when I'd click the "Connect!" button on a
> > reservation and the RDP connection wouldn't open the first time.
> >  Restarting the RDP client 15-30 seconds later, the connection would
> > succeed.
> >
> > Watching vcld.log, I found that connecting to the Cygwin shell from the
> > management node was taking 6-10 seconds (whereas the same connections on
> > servers using local-disk storage take 1-2 seconds).  I can replicate the
> > behavior running ssh -i /etc/vcl/vcl.key <target machine> from the
> > management node.
> >
> > It really hit home when I started a bash shell LOCALLY (with bash --login
> > -i -x) on a target Windows VM and watched how long it took just to get
> to a
> > bash prompt.  Each of the startup scripts took a long time.  (I'm not
> > running bash-completion, a common complaint about slow Cygwin shell
> > startups.)
> >
> > I *think* -- requesting confirmation of this -- that each time the
> > management node wants to issue a command to a remote computer it
> initiates
> > a new SSH connection, then closes that connection when the command
> finishes
> > processing.  Is that accurate?  If so, that would mean that those 6-10
> > seconds would be compounded several times over while the management node
> > prepares the remote computer for my reservation.  I'm currently
> > investigating moving Cygwin into a RAMdisk on the VM images, but that
> only
> > makes sense if the above assumption about multiple SSH sessions is
> accurate.
> >
> > The latency on the SAN connection is very low, and ESXi reports that
> > latencies on the virtual disks are low.  I have /etc/hosts set up, DNS
> > resolves fine, and pings between the management node and VMs are fine.
> >
> > Has anyone else run into any similar behavior with Cygwin?
> >
> > Many thanks,
> > Mike
> >
> > --
> > *Mike Haudenschild*
> > Education Systems Manager
> > Longsight Group
> > (740) 599-5005 x809
> > mike@longsight.com
> > www.longsight.com
> >
>
>
> --
> Jim O'Dell
> Network Analyst
> California State University Fullerton
> Email: jodell@fullerton.edu
> Phone: (657) 278-2256
>

Re: Dev feedback requested, management commands sent to Cygwin

Posted by James O'Dell <jo...@fullerton.edu>.
Hey Mike,

I had a similar issue with Cygwin.

Basically, I installed Cygwin when I created the image. Over
the course of time I patched the OS, but not Cygwin.
This caused problems.

I eventually removed Cygwin and reinstalled it (and had to
re-run the Cygwin patch-up steps again).

(Pay close attention to which account you are actually logged in as
when installing Cygwin.  I think I needed to be 'root'.)

Anyway, a reinstall of Cygwin helped me out.

Hope this helps,

__Jim


On 2/29/2012 10:41 AM, Mike Haudenschild wrote:
> Good afternoon, devs --
> 
> I've been experiencing slower than expected reservation provisioning times
> on a VCL infrastructure that uses a SAN for all storage (all ESXi, on
> blades).  I first noticed it when I'd click the "Connect!" button on a
> reservation and the RDP connection wouldn't open the first time.
>  Restarting the RDP client 15-30 seconds later, the connection would
> succeed.
> 
> Watching vcld.log, I found that connecting to the Cygwin shell from the
> management node was taking 6-10 seconds (whereas the same connections on
> servers using local-disk storage take 1-2 seconds).  I can replicate the
> behavior running ssh -i /etc/vcl/vcl.key <target machine> from the
> management node.
> 
> It really hit home when I started a bash shell LOCALLY (with bash --login
> -i -x) on a target Windows VM and watched how long it took just to get to a
> bash prompt.  Each of the startup scripts took a long time.  (I'm not
> running bash-completion, a common complaint about slow Cygwin shell
> startups.)
> 
> I *think* -- requesting confirmation of this -- that each time the
> management node wants to issue a command to a remote computer it initiates
> a new SSH connection, then closes that connection when the command finishes
> processing.  Is that accurate?  If so, that would mean that those 6-10
> seconds would be compounded several times over while the management node
> prepares the remote computer for my reservation.  I'm currently
> investigating moving Cygwin into a RAMdisk on the VM images, but that only
> makes sense if the above assumption about multiple SSH sessions is accurate.
> 
> The latency on the SAN connection is very low, and ESXi reports that
> latencies on the virtual disks are low.  I have /etc/hosts set up, DNS
> resolves fine, and pings between the management node and VMs are fine.
> 
> Has anyone else run into any similar behavior with Cygwin?
> 
> Many thanks,
> Mike
> 
> --
> *Mike Haudenschild*
> Education Systems Manager
> Longsight Group
> (740) 599-5005 x809
> mike@longsight.com
> www.longsight.com
> 


--
Jim O'Dell
Network Analyst
California State University Fullerton
Email: jodell@fullerton.edu
Phone: (657) 278-2256