Posted to user@hive.apache.org by sagar nikam <sa...@gmail.com> on 2012/10/27 11:44:12 UTC

Re: FAILED: Hive Internal Error

Respected Sir/Madam,

>
>
> I have installed Hadoop on my Ubuntu 12.04 system, and I installed Hive as well.
> It worked fine for some days, but one day I shut down my machine directly
> (without closing Hive & Hadoop first).
> Now when I run some queries, an error is thrown (queries like "show databases"
> and "use database_name" work fine), but the query below fails:
>
> hive> select count(*) from cidade;
> Error thrown:-
>
> FAILED: Hive Internal Error:
> java.lang.RuntimeException(java.net.ConnectException: Call to localhost/
> 127.0.0.1:54310 failed on connection exception:
> java.net.ConnectException: Connection refused)
> java.lang.RuntimeException: java.net.ConnectException: Call to localhost/
> 127.0.0.1:54310 failed on connection exception:
> java.net.ConnectException: Connection refused
> at org.apache.hadoop.hive.ql.Context.getScratchDir(Context.java:151)
>  at org.apache.hadoop.hive.ql.Context.getMRScratchDir(Context.java:190)
> at org.apache.hadoop.hive.ql.Context.getMRTmpFileURI(Context.java:247)
>  at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:900)
> at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:6594)
>  at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:238)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:340)
>  at org.apache.hadoop.hive.ql.Driver.run(Driver.java:736)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:164)
>  at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:241)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:456)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>  at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
>  at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused
>  at org.apache.hadoop.ipc.Client.wrapException(Client.java:767)
> at org.apache.hadoop.ipc.Client.call(Client.java:743)
>  at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
> at $Proxy4.getProtocolVersion(Unknown Source)
>  at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
> at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
>  at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
>  at
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>  at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>  at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)
>  at org.apache.hadoop.hive.ql.Context.getScratchDir(Context.java:145)
> ... 15 more
> Caused by: java.net.ConnectException: Connection refused
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>  at
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
>  at
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
> at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
>  at org.apache.hadoop.ipc.Client.getConnection(Client.java:860)
> at org.apache.hadoop.ipc.Client.call(Client.java:720)
>  ... 28 more
>
> =========================================================================================================================
>
> Is it that some files may have been damaged during the shutdown?
> What could be the cause of this error?
>

Re: FAILED: Hive Internal Error

Posted by sagar nikam <sa...@gmail.com>.
Sir, I did what you said:

shell>:~/Hadoop/hadoop-0.20.2/conf$ netstat -tulpn
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
    PID/Program name
tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN
     -
tcp        0      0 0.0.0.0:50060           0.0.0.0:*               LISTEN
     4328/java
tcp        0      0 0.0.0.0:50030           0.0.0.0:*               LISTEN
     4101/java
tcp        0      0 127.0.0.1:45298         0.0.0.0:*               LISTEN
     4328/java
tcp        0      0 0.0.0.0:48946           0.0.0.0:*               LISTEN
     3784/java
tcp        0      0 0.0.0.0:54771           0.0.0.0:*               LISTEN
     4027/java
tcp        0      0 127.0.0.1:53            0.0.0.0:*               LISTEN
     -
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN
     -
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN
     -
tcp        0      0 0.0.0.0:51194           0.0.0.0:*               LISTEN
     4101/java
tcp        0      0 0.0.0.0:8006            0.0.0.0:*               LISTEN
     -
tcp        0      0 127.0.0.1:54311         0.0.0.0:*               LISTEN
     4101/java
tcp        0      0 0.0.0.0:8007            0.0.0.0:*               LISTEN
     -
tcp6       0      0 :::22                   :::*                    LISTEN
     -
tcp6       0      0 ::1:631                 :::*                    LISTEN
     -
udp        0      0 0.0.0.0:52059           0.0.0.0:*
    -
udp        0      0 127.0.0.1:53            0.0.0.0:*
    -
udp        0      0 0.0.0.0:68              0.0.0.0:*
    -
udp        0      0 0.0.0.0:5353            0.0.0.0:*
    -
udp6       0      0 :::50206                :::*
     -
udp6       0      0 :::5353                 :::*
     -
=========================================================================================

If I run

shell> lsof -i tcp:54310
shell> netstat | grep 54310

nothing is shown, which means nothing is listening on port 54310.


Should I change the port from 54310 to 9000 in core-site.xml?
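For reference, a minimal sketch of the relevant core-site.xml entry on a
pseudo-distributed Hadoop 0.20.x setup; the host and port below are only
examples and must match whatever address the NameNode is actually started with:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>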



Regards
Sagar Nikam
Bangalore
India

Re: FAILED: Hive Internal Error

Posted by sagar nikam <sa...@gmail.com>.
Sir,

Should I change the port from 54310 to 9000 in core-site.xml?

Re: FAILED: Hive Internal Error

Posted by shashwat shriparv <dw...@gmail.com>.
It just means that there is no Hadoop service running on the port you are
trying to connect to. Please recheck which port you are using for Hadoop and
where you have specified port 54310. Check whether anything is listening on it
with netstat -nl | grep 54310; if that gives no result, check which port your
Hadoop is actually running on.
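For example, a rough sketch; the paths assume the Hadoop 0.20.2 layout used
elsewhere in this thread:

shell> grep -A1 fs.default.name ~/Hadoop/hadoop-0.20.2/conf/core-site.xml   # port the NameNode is configured for
shell> netstat -nl | grep 54310                                             # is anything listening on 54310?
shell> jps                                                                  # is a NameNode process running at all?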

On Mon, Oct 29, 2012 at 5:45 PM, sagar nikam <sa...@gmail.com>wrote:

> I started the DFS services as follows:
>
> ========================================================================================
> trendwise@Trendwise:~/Hadoop/hadoop-0.20.2/bin$ ./start-all.sh
> starting namenode, logging to
> /home/trendwise/Hadoop/hadoop-0.20.2/bin/../logs/hadoop-trendwise-namenode-Trendwise.out
> trendwise@localhost's password:
> localhost: starting datanode, logging to
> /home/trendwise/Hadoop/hadoop-0.20.2/bin/../logs/hadoop-trendwise-datanode-Trendwise.out
> trendwise@localhost's password:
> localhost: starting secondarynamenode, logging to
> /home/trendwise/Hadoop/hadoop-0.20.2/bin/../logs/hadoop-trendwise-secondarynamenode-Trendwise.out
> starting jobtracker, logging to
> /home/trendwise/Hadoop/hadoop-0.20.2/bin/../logs/hadoop-trendwise-jobtracker-Trendwise.out
> trendwise@localhost's password:
> localhost: starting tasktracker, logging to
> /home/trendwise/Hadoop/hadoop-0.20.2/bin/../logs/hadoop-trendwise-tasktracker-Trendwise.out
>
> After that I ran jps:
> trendwise@Trendwise:~/Hadoop/hadoop-0.20.2/bin$ jps
> 10028 JobTracker
> 3812 RunJar
> 10297 Jps
> 9962 SecondaryNameNode
> 9722 DataNode
> 10252 TaskTracker
> trendwise@Trendwise:~/Hadoop/hadoop-0.20.2/bin$
>
> ======================================================================================
> everything seems fine
>
> but I get the same error in Hive
>
> =======================================================================================
> trendwise@Trendwise:~/Hadoop/hive-0.7.1/bin$ ./hive
> Hive history
> file=/tmp/trendwise/hive_job_log_trendwise_201210291740_1554227890.txt
> hive> show databases;
> OK
> default
> mm
> mm2
> xyz
> Time taken: 10.574 seconds
> hive> use mm2;
> OK
> Time taken: 0.046 seconds
> hive> show tables;
> OK
> cidade
> concessionaria
> familia
> modelo
> venda
> Time taken: 0.562 seconds
> hive> select * from modelo;
> FAILED: Hive Internal Error:
> java.lang.RuntimeException(java.net.ConnectException: Call to localhost/
> 127.0.0.1:54310 failed on connection exception:
> java.net.ConnectException: Connection refused)
> java.lang.RuntimeException: java.net.ConnectException: Call to localhost/
> 127.0.0.1:54310 failed on connection exception:
> java.net.ConnectException: Connection refused
> at org.apache.hadoop.hive.ql.Context.getScratchDir(Context.java:151)
>  at org.apache.hadoop.hive.ql.Context.getMRScratchDir(Context.java:190)
> at org.apache.hadoop.hive.ql.Context.getMRTmpFileURI(Context.java:247)
>  at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:900)
> at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:6594)
>  at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:238)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:340)
>  at org.apache.hadoop.hive.ql.Driver.run(Driver.java:736)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:164)
>  at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:241)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:456)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>  at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
>  at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused
>  at org.apache.hadoop.ipc.Client.wrapException(Client.java:767)
> at org.apache.hadoop.ipc.Client.call(Client.java:743)
>  at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
> at $Proxy4.getProtocolVersion(Unknown Source)
>  at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
> at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
>  at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
>  at
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>  at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>  at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)
>  at org.apache.hadoop.hive.ql.Context.getScratchDir(Context.java:145)
> ... 15 more
> Caused by: java.net.ConnectException: Connection refused
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>  at
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
>  at
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
> at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
>  at org.apache.hadoop.ipc.Client.getConnection(Client.java:860)
> at org.apache.hadoop.ipc.Client.call(Client.java:720)
>  ... 28 more
>
> ============================================================================================
>
>
> I have attached the log file, which says:
>
> 2012-10-29 17:29:11,580 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: localhost/127.0.0.1:54310. Already tried 0 time(s).
> 2012-10-29 17:29:12,581 INFO org.apache.hadoop.ipc.Client: Retrying
> connect to server: localhost/127.0.0.1:54310. Already tried 1 time(s).
> ...continued.....................................................
>
> which port should I give to Hive?
>



-- 


∞
Shashwat Shriparv

Re: FAILED: Hive Internal Error

Posted by sagar nikam <sa...@gmail.com>.
I started the DFS services as follows:
========================================================================================
trendwise@Trendwise:~/Hadoop/hadoop-0.20.2/bin$ ./start-all.sh
starting namenode, logging to
/home/trendwise/Hadoop/hadoop-0.20.2/bin/../logs/hadoop-trendwise-namenode-Trendwise.out
trendwise@localhost's password:
localhost: starting datanode, logging to
/home/trendwise/Hadoop/hadoop-0.20.2/bin/../logs/hadoop-trendwise-datanode-Trendwise.out
trendwise@localhost's password:
localhost: starting secondarynamenode, logging to
/home/trendwise/Hadoop/hadoop-0.20.2/bin/../logs/hadoop-trendwise-secondarynamenode-Trendwise.out
starting jobtracker, logging to
/home/trendwise/Hadoop/hadoop-0.20.2/bin/../logs/hadoop-trendwise-jobtracker-Trendwise.out
trendwise@localhost's password:
localhost: starting tasktracker, logging to
/home/trendwise/Hadoop/hadoop-0.20.2/bin/../logs/hadoop-trendwise-tasktracker-Trendwise.out

After that I ran jps:
trendwise@Trendwise:~/Hadoop/hadoop-0.20.2/bin$ jps
10028 JobTracker
3812 RunJar
10297 Jps
9962 SecondaryNameNode
9722 DataNode
10252 TaskTracker
trendwise@Trendwise:~/Hadoop/hadoop-0.20.2/bin$
======================================================================================
everything seems fine

but I get the same error in Hive
=======================================================================================
trendwise@Trendwise:~/Hadoop/hive-0.7.1/bin$ ./hive
Hive history
file=/tmp/trendwise/hive_job_log_trendwise_201210291740_1554227890.txt
hive> show databases;
OK
default
mm
mm2
xyz
Time taken: 10.574 seconds
hive> use mm2;
OK
Time taken: 0.046 seconds
hive> show tables;
OK
cidade
concessionaria
familia
modelo
venda
Time taken: 0.562 seconds
hive> select * from modelo;
FAILED: Hive Internal Error:
java.lang.RuntimeException(java.net.ConnectException: Call to localhost/
127.0.0.1:54310 failed on connection exception: java.net.ConnectException:
Connection refused)
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/
127.0.0.1:54310 failed on connection exception: java.net.ConnectException:
Connection refused
at org.apache.hadoop.hive.ql.Context.getScratchDir(Context.java:151)
at org.apache.hadoop.hive.ql.Context.getMRScratchDir(Context.java:190)
at org.apache.hadoop.hive.ql.Context.getMRTmpFileURI(Context.java:247)
at
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:900)
at
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:6594)
at
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:238)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:340)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:736)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:164)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:241)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:456)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:767)
at org.apache.hadoop.ipc.Client.call(Client.java:743)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at $Proxy4.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
at
org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)
at org.apache.hadoop.hive.ql.Context.getScratchDir(Context.java:145)
... 15 more
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:860)
at org.apache.hadoop.ipc.Client.call(Client.java:720)
... 28 more
============================================================================================


I have attached the log file, which says:

2012-10-29 17:29:11,580 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:54310. Already tried 0 time(s).
2012-10-29 17:29:12,581 INFO org.apache.hadoop.ipc.Client: Retrying connect
to server: localhost/127.0.0.1:54310. Already tried 1 time(s).
...continued.....................................................

which port should I give to Hive?
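Side note, as a sketch rather than a definitive answer: the Hive CLI does not
take a separate HDFS port of its own; it picks up fs.default.name from the
Hadoop configuration on its classpath, so the NameNode simply has to be
listening on that address. The jps output above has no NameNode entry, which
matches the connection-refused error. One quick way to see why it did not
start, using the log directory printed by start-all.sh above (the .log name
alongside the .out file is an assumption):

shell> jps | grep -i namenode    # NameNode should be listed (SecondaryNameNode alone is not enough)
shell> tail -n 50 /home/trendwise/Hadoop/hadoop-0.20.2/logs/hadoop-trendwise-namenode-Trendwise.log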

RE: FAILED: Hive Internal Error

Posted by yo...@wipro.com.
Hi Sagar,

Ajit is correct.

Start your services first by using the command:

start-all.sh
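Roughly, as a sketch using the paths shown elsewhere in this thread, and then
confirm that a NameNode process actually appears:

shell> cd ~/Hadoop/hadoop-0.20.2/bin
shell> ./start-all.sh
shell> jps    # should list NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker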

Regards
Yogesh Kumar Dhari
________________________________
From: Ajit Kumar Shreevastava [Ajit.Shreevastava@hcl.com]
Sent: Monday, October 29, 2012 11:42 AM
To: user@hive.apache.org
Subject: RE: FAILED: Hive Internal Error

Hi Sagar,
First, you should start the DFS and MapReduce services.
This error occurs when those services are not running.

Regards,
Ajit

From: sagar nikam [mailto:sagarnikam123@gmail.com]
Sent: Sunday, October 28, 2012 12:58 PM
To: user@hive.apache.org; bejoy_ks@yahoo.com
Subject: Re: FAILED: Hive Internal Error

I tried, but it did not work:

shell>:~/Hadoop/hadoop-0.20.2$ bin/hadoop dfs namenode -format
12/10/28 12:56:42 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
12/10/28 12:56:43 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
12/10/28 12:56:44 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
12/10/28 12:56:45 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
12/10/28 12:56:46 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
12/10/28 12:56:47 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
12/10/28 12:56:48 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s).
12/10/28 12:56:49 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s).
12/10/28 12:56:50 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s).
12/10/28 12:56:51 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s).
Bad connection to FS. command aborted.




RE: FAILED: Hive Internal Error

Posted by Ajit Kumar Shreevastava <Aj...@hcl.com>.
Hi Sagar,
First, you should start the DFS and MapReduce services.
This error occurs when those services are not running.
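A minimal sketch of starting them individually (script names from the Hadoop
0.20.x bin/ directory; start-all.sh runs both in one go):

shell> ~/Hadoop/hadoop-0.20.2/bin/start-dfs.sh     # HDFS: NameNode, DataNode, SecondaryNameNode
shell> ~/Hadoop/hadoop-0.20.2/bin/start-mapred.sh  # MapReduce: JobTracker, TaskTracker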

Regards,
Ajit

From: sagar nikam [mailto:sagarnikam123@gmail.com]
Sent: Sunday, October 28, 2012 12:58 PM
To: user@hive.apache.org; bejoy_ks@yahoo.com
Subject: Re: FAILED: Hive Internal Error

I tried, but it did not work:

shell>:~/Hadoop/hadoop-0.20.2$ bin/hadoop dfs namenode -format
12/10/28 12:56:42 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
12/10/28 12:56:43 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
12/10/28 12:56:44 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
12/10/28 12:56:45 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
12/10/28 12:56:46 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
12/10/28 12:56:47 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
12/10/28 12:56:48 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s).
12/10/28 12:56:49 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s).
12/10/28 12:56:50 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s).
12/10/28 12:56:51 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s).
Bad connection to FS. command aborted.




Re: FAILED: Hive Internal Error

Posted by sagar nikam <sa...@gmail.com>.
I tried, but it did not work:

shell>:~/Hadoop/hadoop-0.20.2$ bin/hadoop dfs namenode -format
12/10/28 12:56:42 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:9000. Already tried 0 time(s).
12/10/28 12:56:43 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:9000. Already tried 1 time(s).
12/10/28 12:56:44 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:9000. Already tried 2 time(s).
12/10/28 12:56:45 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:9000. Already tried 3 time(s).
12/10/28 12:56:46 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:9000. Already tried 4 time(s).
12/10/28 12:56:47 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:9000. Already tried 5 time(s).
12/10/28 12:56:48 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:9000. Already tried 6 time(s).
12/10/28 12:56:49 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:9000. Already tried 7 time(s).
12/10/28 12:56:50 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:9000. Already tried 8 time(s).
12/10/28 12:56:51 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:9000. Already tried 9 time(s).
Bad connection to FS. command aborted.

Re: FAILED: Hive Internal Error

Posted by Bejoy KS <be...@yahoo.com>.
Hi Sagar

Your JT web UI should work fine since the JobTracker daemon is up and running, but the DFS web UI at http://localhost:50070 will not work because the NameNode is not up and running.

The issue here is that your NameNode is down for some reason.
Can you check the status of port 54310 using netstat to see whether any other process is running on it? Also, are you able to start your NameNode if you change the port to, say, 9000 in core-site.xml?
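A rough sequence for that check and for trying the port change (paths as used
elsewhere in this thread; 9000 is just the example port mentioned above):

shell> netstat -nl | grep 54310                      # anything already bound to the old port?
shell> ~/Hadoop/hadoop-0.20.2/bin/stop-all.sh
shell> vi ~/Hadoop/hadoop-0.20.2/conf/core-site.xml  # set fs.default.name to hdfs://localhost:9000
shell> ~/Hadoop/hadoop-0.20.2/bin/start-all.sh
shell> jps                                           # NameNode should now appear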

Regards
Bejoy KS

Sent from handheld, please excuse typos.

-----Original Message-----
From: sagar nikam <sa...@gmail.com>
Date: Sat, 27 Oct 2012 18:04:25 
To: <us...@hive.apache.org>
Reply-To: user@hive.apache.org
Subject: Re: FAILED: Hive Internal Error

Respected sir,

I don't know where the NameNode is, but my JT (JobTracker) web interface is
running fine at http://localhost:50030/jobtracker.jsp in the browser and shows:

localhost Hadoop Map/Reduce Administration
State: INITIALIZING
Started: Sat Oct 27 17:41:34 IST 2012
Version: 0.20.2, r911707
Compiled: Fri Feb 19 08:07:34 UTC 2010 by chrisdo
Identifier: 201210271741




I tried to format the NameNode with the command below, but it shows an error:

shell>:~/Hadoop/hadoop-0.20.2/bin$ ./hadoop dfs namenode -format
12/10/27 17:45:06 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 0 time(s).
12/10/27 17:45:07 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 1 time(s).
12/10/27 17:45:08 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 2 time(s).
12/10/27 17:45:09 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 3 time(s).
12/10/27 17:45:10 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 4 time(s).
12/10/27 17:45:11 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 5 time(s).
12/10/27 17:45:12 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 6 time(s).
12/10/27 17:45:13 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 7 time(s).
12/10/27 17:45:14 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 8 time(s).
12/10/27 17:45:15 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 9 time(s).
Bad connection to FS. command aborted.




Regards
Sagar Nikam
Pharmacist-Bioinformatician-Software Engineer
Bangalore
India


Re: FAILED: Hive Internal Error

Posted by sagar nikam <sa...@gmail.com>.
Respected sir,

I don't know where the NameNode is, but my JT (JobTracker) web interface is
running fine at http://localhost:50030/jobtracker.jsp in the browser and shows:

localhost Hadoop Map/Reduce Administration
State: INITIALIZING
Started: Sat Oct 27 17:41:34 IST 2012
Version: 0.20.2, r911707
Compiled: Fri Feb 19 08:07:34 UTC 2010 by chrisdo
Identifier: 201210271741




I tried to format the NameNode with the command below, but it shows an error:

shell>:~/Hadoop/hadoop-0.20.2/bin$ ./hadoop dfs namenode -format
12/10/27 17:45:06 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 0 time(s).
12/10/27 17:45:07 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 1 time(s).
12/10/27 17:45:08 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 2 time(s).
12/10/27 17:45:09 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 3 time(s).
12/10/27 17:45:10 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 4 time(s).
12/10/27 17:45:11 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 5 time(s).
12/10/27 17:45:12 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 6 time(s).
12/10/27 17:45:13 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 7 time(s).
12/10/27 17:45:14 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 8 time(s).
12/10/27 17:45:15 INFO ipc.Client: Retrying connect to server: localhost/
127.0.0.1:54310. Already tried 9 time(s).
Bad connection to FS. command aborted.




Regards
Sagar Nikam
Pharmacist-Bioinformatician-Software Engineer
Bangalore
India

RE: FAILED: Hive Internal Error

Posted by as...@wipro.com.
Where is the NameNode?
JT = the JobTracker web interface.

If you don't have much data in Hadoop and you can afford to lose the data already in HDFS, then try

> bin/hadoop dfs namenode -format (Apache hadoop)

and then start your Hadoop daemons again.
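As a hedged aside: on Apache Hadoop 0.20.x the format command is normally
invoked directly on the hadoop script, without the dfs subcommand, and it wipes
whatever is already stored in HDFS, so only run it if the data is expendable as
noted above. A sketch of the usual sequence:

shell> ~/Hadoop/hadoop-0.20.2/bin/stop-all.sh
shell> ~/Hadoop/hadoop-0.20.2/bin/hadoop namenode -format
shell> ~/Hadoop/hadoop-0.20.2/bin/start-all.sh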



Thanks,
Ashok
________________________________________
From: sagar nikam [sagarnikam123@gmail.com]
Sent: Saturday, October 27, 2012 4:03 PM
To: user@hive.apache.org
Subject: Re: FAILED: Hive Internal Error

Yes sir, my jps is working correctly. I ran jps in the terminal and it shows:

shell $> jps
3630 TaskTracker
3403 JobTracker
3086 DataNode
3678 Jps
3329 SecondaryNameNode


But what is the "JT web UI"? I don't know what JT is.


Re: FAILED: Hive Internal Error

Posted by sagar nikam <sa...@gmail.com>.
Yes sir, my jps is working correctly. I ran jps in the terminal and it shows:

shell $> jps
3630 TaskTracker
3403 JobTracker
3086 DataNode
3678 Jps
3329 SecondaryNameNode


But what is the "JT web UI"? I don't know what JT is.

RE: FAILED: Hive Internal Error

Posted by as...@wipro.com.
Can you do a jps and check that all the daemons are running fine?
If yes, check whether your JT web UI is up or not.
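The default web UIs on Hadoop 0.20.x are the JobTracker UI on port 50030 and
the NameNode (HDFS) UI on port 50070; a quick reachability sketch (curl is just
one way to check):

shell> curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50030/jobtracker.jsp   # JobTracker UI
shell> curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50070/                 # NameNode UI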

Thanks,
Ashok
________________________________________
From: sagar nikam [sagarnikam123@gmail.com]
Sent: Saturday, October 27, 2012 3:14 PM
To: user@hive.apache.org; dev@hive.apache.org
Subject: Re: FAILED: Hive Internal Error

Respected Sir/Madam,


I have installed Hadoop on my Ubuntu 12.04 system, and I installed Hive as well. It worked fine for some days, but one day I shut down my machine directly (without closing Hive & Hadoop first).
Now when I run some queries, an error is thrown (queries like "show databases" and "use database_name" work fine), but the query below fails:

hive> select count(*) from cidade;
Error thrown:-

FAILED: Hive Internal Error: java.lang.RuntimeException(java.net.ConnectException: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused)
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.hive.ql.Context.getScratchDir(Context.java:151)
at org.apache.hadoop.hive.ql.Context.getMRScratchDir(Context.java:190)
at org.apache.hadoop.hive.ql.Context.getMRTmpFileURI(Context.java:247)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:900)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:6594)
at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:238)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:340)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:736)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:164)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:241)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:456)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:767)
at org.apache.hadoop.ipc.Client.call(Client.java:743)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at $Proxy4.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)
at org.apache.hadoop.hive.ql.Context.getScratchDir(Context.java:145)
... 15 more
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:404)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:304)
at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:176)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:860)
at org.apache.hadoop.ipc.Client.call(Client.java:720)
... 28 more
=========================================================================================================================

Is it that some files may have been damaged during the shutdown?
What could be the cause of this error?



Re: FAILED: Hive Internal Error

Posted by Steve Loughran <st...@hortonworks.com>.
On 27 October 2012 10:44, sagar nikam <sa...@gmail.com> wrote:

> Respected Sir/Madam,
>
>>
>>
This is a Hive question. Please don't cross-post to general@hadoop or
user@hadoop.

thanks
