Posted to users@zeppelin.apache.org by manya cancerian <ma...@gmail.com> on 2015/08/05 08:35:43 UTC
Exception while submitting spark job using Yarn
Hi guys,
I am trying to run Zeppelin using YARN as the resource manager. I have made
the following changes:
1. I have set the master to 'yarn-client' in the interpreter settings via
the UI.
2. I have set HADOOP_CONF_DIR to the conf directory containing the Hadoop
configuration files.
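A minimal sketch of how those two settings might look in conf/zeppelin-env.sh
(the path and values here are illustrative; match them to your installation):

```shell
# conf/zeppelin-env.sh -- illustrative values, adjust to your installation.
export HADOOP_CONF_DIR=/etc/hadoop/conf   # directory holding core-site.xml, yarn-site.xml, etc.
export MASTER=yarn-client                 # mirrors the "master" value set in the interpreter UI
```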
In my scenario I have three machines:
a. The client machine, where Zeppelin is installed.
b. A machine running the YARN resource manager along with the Hadoop
cluster's namenode and a datanode.
c. A machine running a datanode.
When I submit a job from the client machine, it gets submitted to YARN but
fails with the following exception:
15/08/04 15:08:05 ERROR yarn.ApplicationMaster: Uncaught exception:
org.apache.spark.SparkException: Failed to connect to driver!
at org.apache.spark.deploy.yarn.ApplicationMaster.waitForSparkDriver(ApplicationMaster.scala:424)
at org.apache.spark.deploy.yarn.ApplicationMaster.runExecutorLauncher(ApplicationMaster.scala:284)
at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:146)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$main$1.apply$mcV$sp(ApplicationMaster.scala:575)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:60)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:59)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:59)
at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:573)
at org.apache.spark.deploy.yarn.ExecutorLauncher$.main(ApplicationMaster.scala:596)
at org.apache.spark.deploy.yarn.ExecutorLauncher.main(ApplicationMaster.scala)
15/08/04 15:08:05 INFO yarn.ApplicationMaster: Final app status:
FAILED, exitCode: 10, (reason: Uncaught exception: Failed to connect
to driver!)
Any help is much appreciated!
Regards
Manya
RE: Exception while submitting spark job using Yarn
Posted by "Vadla, Karthik" <ka...@intel.com>.
Hi Naveen,
This should help you as a starter guide to set up Zeppelin:
http://blog.cloudera.com/blog/2015/07/how-to-install-apache-zeppelin-on-cdh/
Thanks
Karthik Vadla
Re: Exception while submitting spark job using Yarn
Posted by Todd Nist <ts...@gmail.com>.
@Naveen,
This email thread has the steps to follow:
https://mail.google.com/mail/u/0/?ui=2&ik=89501a9ed8&view=lg&msg=14ef946743652b50
Along with these vendor-specific guides, depending on your Hadoop
installation:
Cloudera:
https://mail.google.com/mail/u/0/?ui=2&ik=89501a9ed8&view=lg&msg=14ef946743652b50
https://software.intel.com/sites/default/files/managed/bb/bf/Apache-Zeppelin.pdf
Hortonworks has its own how-to post here:
http://hortonworks.com/blog/introduction-to-data-science-with-apache-spark/
-Todd
Re: Exception while submitting spark job using Yarn
Posted by manya cancerian <ma...@gmail.com>.
Hey guys, I resolved the issue. There was an entry in the /etc/hosts file
mapping the hostname to localhost, due to which YARN was trying to connect
to the Spark driver on the client machine's localhost.
Once the entry was removed, it picked up the real hostname and was able to
connect.
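For anyone hitting the same thing, the fix above can be sketched as follows
(the hostname "zeppelin-client" and the IP address are illustrative):

```shell
# Broken /etc/hosts on the client machine -- the machine's own hostname is
# mapped onto the loopback address, so Spark advertises "localhost" as the
# driver address and the YARN ApplicationMaster dials its *own* localhost:
#   127.0.0.1  localhost zeppelin-client
# Fixed form -- loopback and the real address are separate entries:
#   127.0.0.1  localhost
#   10.0.0.5   zeppelin-client

# A small check (names are illustrative): warn if a hostname is mapped to a
# loopback address in a given hosts file.
check_loopback_mapping() {
  hosts_file="$1"
  host="$2"
  if grep -E '^(127\.|::1)' "$hosts_file" | grep -qw "$host"; then
    echo "BAD: $host maps to loopback in $hosts_file"
  else
    echo "OK: $host does not map to loopback in $hosts_file"
  fi
}
```

Running `check_loopback_mapping /etc/hosts "$(hostname)"` on the Zeppelin
machine would have flagged the offending entry.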
Thanks Jongyoul Lee and Todd Nist for the help; this forum is really great.
Naveen, I will try to post the simple steps I followed for configuring
Zeppelin with YARN, most probably tomorrow.
Thanks
Manya
RE: Exception while submitting spark job using Yarn
Posted by Naveenkumar GP <Na...@infosys.com>.
No. How to do that one?
**************** CAUTION - Disclaimer *****************
This e-mail contains PRIVILEGED AND CONFIDENTIAL INFORMATION intended solely
for the use of the addressee(s). If you are not the intended recipient, please
notify the sender by e-mail and delete the original message. Further, you are not
to copy, disclose, or distribute this e-mail or its contents to any other person and
any such actions are unlawful. This e-mail may contain viruses. Infosys has taken
every reasonable precaution to minimize this risk, but is not liable for any damage
you may sustain as a result of any virus in this e-mail. You should carry out your
own virus checks before opening the e-mail or attachment. Infosys reserves the
right to monitor and review the content of all messages sent to or from this e-mail
address. Messages sent to or from this e-mail address may be stored on the
Infosys e-mail system.
***INFOSYS******** End of Disclaimer ********INFOSYS***
Re: Exception while submitting spark job using Yarn
Posted by Todd Nist <ts...@gmail.com>.
Have you built Zeppelin against the versions of Hadoop and Spark you are
using? It has to be built with the appropriate versions, as this pulls in
the required libraries from Hadoop and Spark. By default, Zeppelin will not
work on YARN without doing this build.
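A sketch of such a build for Zeppelin of that era; the profile names and
version numbers below are illustrative examples and must be matched to your
actual Spark and Hadoop versions:

```shell
# Build Zeppelin from source against a specific Spark/Hadoop combination
# with YARN support enabled. Profiles and versions are illustrative.
mvn clean package -DskipTests \
    -Pspark-1.4 \
    -Phadoop-2.6 -Dhadoop.version=2.6.0 \
    -Pyarn
```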
@deepujain posted a fairly comprehensive guide on the forum yesterday,
August 4th, with the steps to deploy under Hadoop and YARN.
HTH.
-Todd