Posted to mapreduce-user@hadoop.apache.org by Anand Murali <an...@yahoo.com> on 2015/05/15 08:52:53 UTC

Unable to start Hive

Dear All:
I am running Hadoop-2.6 (pseudo mode) on Ubuntu 15.04, and am trying to connect Hive to it after installation. I run . .hadoop as a start-up script, which contains the environment variable settings below.

. .hadoop
export HADOOP_HOME=/home/anand_vihar/hadoop-2.6.0
export JAVA_HOME=/home/anand_vihar/jdk1.7.0_75/
export HADOOP_PREFIX=/home/anand_vihar/hadoop-2.6.0
export HADOOP_INSTALL=/home/anand_vihar/hadoop-2.6.0
export PIG_HOME=/home/anand_vihar/pig-0.14.0
export PIG_INSTALL=/home/anand_vihar/pig-0.14.0
export PIG_CLASSPATH=/home/anand_vihar/hadoop-2.6.0/etc/hadoop/
export HIVE_HOME=/home/anand_vihar/hive-1.1.0
export HIVE_INSTALL=/home/anand_vihar/hive-1.1.0
export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin:$HADOOP_HOME:$JAVA_HOME:$PIG_INSTALL/bin:$PIG_CLASSPATH:$HIVE_HOME:$HIVE_INSTALL/bin
echo $HADOOP_HOME
echo $JAVA_HOME
echo $HADOOP_INSTALL
echo $PIG_HOME
echo $PIG_INSTALL
echo $PIG_CLASSPATH
echo $HIVE_HOME
echo $PATH
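
One detail worth flagging in the script above: several PATH entries point at install roots rather than bin directories ($HADOOP_HOME, $JAVA_HOME and $HIVE_HOME on their own add nothing executable to the PATH), and $HADOOP_HOME duplicates $HADOOP_INSTALL. A tightened version of the PATH line might look like this (same directories, just the bin/sbin forms):

export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin:$JAVA_HOME/bin:$PIG_INSTALL/bin:$HIVE_INSTALL/bin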

Error
anand_vihar@Latitude-E5540:~$ hive

Logging initialized using configuration in jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-common-1.1.0.jar!/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/anand_vihar/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-jdbc-1.1.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in safe mode.
The reported blocks 2 has reached the threshold 0.9990 of total blocks 2. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 6 seconds.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:472)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in safe mode.
The reported blocks 2 has reached the threshold 0.9990 of total blocks 2. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 6 seconds.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

    at org.apache.hadoop.ipc.Client.call(Client.java:1468)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy14.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2753)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2724)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
    at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:584)
    at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:526)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:458)
    ... 8 more

Can somebody advise?

Thanks

Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)
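
The exception text above actually contains the answer: the NameNode was in its safe mode extension phase and reported it would leave safe mode "in 6 seconds", so simply retrying hive a few seconds later should succeed. For reference, safe mode can also be inspected and exited explicitly; a minimal sketch, assuming the standard hdfs CLI from Hadoop 2.6 is on the PATH:

$ hdfs dfsadmin -safemode get      # report whether the namenode is in safe mode
$ hdfs dfsadmin -safemode leave    # force the namenode out of safe mode

Forcing safe mode off is harmless on a healthy pseudo-distributed setup, but on a real cluster it is usually better to let the NameNode exit on its own, since safe mode guards against acting on an incomplete block map.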

Re: Unable to start Hive

Posted by Vikas Parashar <pa...@gmail.com>.
Hi Anand,

Your namenode is in safe mode. Could you please share the namenode logs?
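
By default the hadoop-daemon.sh scripts write each daemon's log under $HADOOP_HOME/logs, named after the user and host, so on this setup the NameNode log should be somewhere like the following (assuming no HADOOP_LOG_DIR override):

$ less /home/anand_vihar/hadoop-2.6.0/logs/hadoop-anand_vihar-namenode-Latitude-E5540.log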

On Fri, May 15, 2015 at 12:22 PM, Anand Murali <an...@yahoo.com>
wrote:

> Dear All:
>
> I am running Hadoop-2.6 (pseudo mode) on Ubuntu 15.04, and trying to
> connect Hive to it after installation. I run . .hadoop as a start-up script,
> which contains the environment variable settings below.
>
> *. .hadoop*
> export HADOOP_HOME=/home/anand_vihar/hadoop-2.6.0
> export JAVA_HOME=/home/anand_vihar/jdk1.7.0_75/
> export HADOOP_PREFIX=/home/anand_vihar/hadoop-2.6.0
> export HADOOP_INSTALL=/home/anand_vihar/hadoop-2.6.0
> export PIG_HOME=/home/anand_vihar/pig-0.14.0
> export PIG_INSTALL=/home/anand_vihar/pig-0.14.0
> export PIG_CLASSPATH=/home/anand_vihar/hadoop-2.6.0/etc/hadoop/
> export HIVE_HOME=/home/anand_vihar/hive-1.1.0
> export HIVE_INSTALL=/home/anand_vihar/hive-1.1.0
> export
> PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin:$HADOOP_HOME:$JAVA_HOME:$PIG_INSTALL/bin:$PIG_CLASSPATH:$HIVE_HOME:$HIVE_INSTALL/bin
> echo $HADOOP_HOME
> echo $JAVA_HOME
> echo $HADOOP_INSTALL
> echo $PIG_HOME
> echo $PIG_INSTALL
> echo $PIG_CLASSPATH
> echo $HIVE_HOME
> echo $PATH
>
>
> *Error*
>
> anand_vihar@Latitude-E5540:~$ hive
>
> Logging initialized using configuration in
> jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-common-1.1.0.jar!/hive-log4j.properties
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/home/anand_vihar/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-jdbc-1.1.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Exception in thread "main" java.lang.RuntimeException:
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException):
> Cannot create directory
> /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in
> safe mode.
> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2.
> The number of live datanodes 1 has reached the minimum number 0. In safe
> mode extension. Safe mode will be turned off automatically in 6 seconds.
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>     at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>     at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>
>     at
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:472)
>     at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by:
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException):
> Cannot create directory
> /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in
> safe mode.
> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2.
> The number of live datanodes 1 has reached the minimum number 0. In safe
> mode extension. Safe mode will be turned off automatically in 6 seconds.
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>     at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>     at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>
>     at org.apache.hadoop.ipc.Client.call(Client.java:1468)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1399)
>     at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>     at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
>     at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>     at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>     at com.sun.proxy.$Proxy14.mkdirs(Unknown Source)
>     at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2753)
>     at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2724)
>     at
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
>     at
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
>     at
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>     at
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
>     at
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
>     at
> org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:584)
>     at
> org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:526)
>     at
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:458)
>     ... 8 more
>
> Can somebody advise?
>
> Thanks
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>

Re: Unable to start Hive

Posted by Vikas Parashar <pa...@gmail.com>.
Great, Anand. Now please run the hive command and send me the hive and namenode logs.
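
For the Hive side, the hive-log4j.properties that the CLI reported loading (from hive-common-1.1.0.jar) defaults hive.log.dir to ${java.io.tmpdir}/${user.name}, so the Hive log should normally be at the path below (assuming no overriding hive-log4j.properties in $HIVE_HOME/conf):

$ less /tmp/anand_vihar/hive.log    # default hive.log.dir/hive.log.file for Hive 1.1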

On Fri, May 15, 2015 at 4:07 PM, Anand Murali <an...@yahoo.com> wrote:

> Find below
>
> anand_vihar@Latitude-E5540:~$ hdfs dfs -mkdir /tmp/abc
> anand_vihar@Latitude-E5540:~$ hdfs dfs -ls /tmp/abc
> anand_vihar@Latitude-E5540:~$ hdfs dfs -ls /tmp/
> Found 2 items
> drwxr-xr-x   - anand_vihar supergroup          0 2015-05-15 16:05 /tmp/abc
> drwx-wx-wx   - anand_vihar supergroup          0 2015-05-14 12:03 /tmp/hive
>
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>
>
>
>   On Friday, May 15, 2015 4:05 PM, Vikas Parashar <pa...@gmail.com>
> wrote:
>
>
> Give me o/p of below command
>
> #hadoop fs -mkdir /tmp/abc
>
> #hadoop fs -ls /tmp/abc
>
> On Fri, May 15, 2015 at 3:08 PM, Anand Murali <an...@yahoo.com>
> wrote:
>
> Vikas:
>
> Find below
>
> anand_vihar@Latitude-E5540:~$ hadoop dfsamin -report
> Error: Could not find or load main class dfsamin
> anand_vihar@Latitude-E5540:~$ hadoop dfsadmin -report
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
>
> Configured Capacity: 179431981056 (167.11 GB)
> Present Capacity: 142666625024 (132.87 GB)
> DFS Remaining: 142665678848 (132.87 GB)
> DFS Used: 946176 (924 KB)
> DFS Used%: 0.00%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
>
> -------------------------------------------------
> Live datanodes (1):
>
> Name: 127.0.0.1:50010 (localhost)
> Hostname: Latitude-E5540
> Decommission Status : Normal
> Configured Capacity: 179431981056 (167.11 GB)
> DFS Used: 946176 (924 KB)
> Non DFS Used: 36765356032 (34.24 GB)
> DFS Remaining: 142665678848 (132.87 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 79.51%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 1
> Last contact: Fri May 15 15:07:53 IST 2015
>
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>
>
>
>   On Friday, May 15, 2015 2:52 PM, Vikas Parashar <pa...@gmail.com>
> wrote:
>
>
> please send me o/p of below command
>
> # hadoop dfsadmin -report
>
>
> On Fri, May 15, 2015 at 2:43 PM, Anand Murali <an...@yahoo.com>
> wrote:
>
> Vikas
>
> Can you be more specific? What should I check for in the Hive logs?
>
> Thanks
>
> Regards
>
> Anand
>
> Sent from my iPhone
>
> On 15-May-2015, at 2:41 pm, Vikas Parashar <pa...@gmail.com> wrote:
>
> Hi Anand,
>
> It seems your namenode is working fine. I can't see any "safemode" related
> logs in your namenode file. Kindly check the hive logs as well.
>
> On Fri, May 15, 2015 at 12:40 PM, Anand Murali <an...@yahoo.com>
> wrote:
>
> Vikas:
>
> Please find attached. At this time I would like to tell you that with the
> current installation, I am able to run mapreduce jobs and pig scripts
> without any installation errors. So please make sure any suggestions will not
> break or cascade into the other installations.
>
> Thanks
>
> Regards,
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>
>
>
>   On Friday, May 15, 2015 12:31 PM, Kiran Dangeti <
> kirandkumar2013@gmail.com> wrote:
>
>
> Anand,
> Sometimes it will error out because some resources are not available, so stop
> and start the hadoop cluster and see if that helps.
> On May 15, 2015 12:24 PM, "Anand Murali" <an...@yahoo.com> wrote:
>
> Dear All:
>
> I am running Hadoop-2.6 (pseudo mode) on Ubuntu 15.04, and trying to
> connect Hive to it after installation. I run . .hadoop as a start-up script,
> which contains the environment variable settings below.
>
> *. .hadoop*
> export HADOOP_HOME=/home/anand_vihar/hadoop-2.6.0
> export JAVA_HOME=/home/anand_vihar/jdk1.7.0_75/
> export HADOOP_PREFIX=/home/anand_vihar/hadoop-2.6.0
> export HADOOP_INSTALL=/home/anand_vihar/hadoop-2.6.0
> export PIG_HOME=/home/anand_vihar/pig-0.14.0
> export PIG_INSTALL=/home/anand_vihar/pig-0.14.0
> export PIG_CLASSPATH=/home/anand_vihar/hadoop-2.6.0/etc/hadoop/
> export HIVE_HOME=/home/anand_vihar/hive-1.1.0
> export HIVE_INSTALL=/home/anand_vihar/hive-1.1.0
> export
> PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin:$HADOOP_HOME:$JAVA_HOME:$PIG_INSTALL/bin:$PIG_CLASSPATH:$HIVE_HOME:$HIVE_INSTALL/bin
> echo $HADOOP_HOME
> echo $JAVA_HOME
> echo $HADOOP_INSTALL
> echo $PIG_HOME
> echo $PIG_INSTALL
> echo $PIG_CLASSPATH
> echo $HIVE_HOME
> echo $PATH
>
>
> *Error*
>
> anand_vihar@Latitude-E5540:~$ hive
>
> Logging initialized using configuration in
> jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-common-1.1.0.jar!/hive-log4j.properties
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/home/anand_vihar/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-jdbc-1.1.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Exception in thread "main" java.lang.RuntimeException:
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException):
> Cannot create directory
> /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in
> safe mode.
> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2.
> The number of live datanodes 1 has reached the minimum number 0. In safe
> mode extension. Safe mode will be turned off automatically in 6 seconds.
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>     at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>     at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>
>     at
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:472)
>     at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by:
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException):
> Cannot create directory
> /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in
> safe mode.
> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2.
> The number of live datanodes 1 has reached the minimum number 0. In safe
> mode extension. Safe mode will be turned off automatically in 6 seconds.
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>     at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>     at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>
>     at org.apache.hadoop.ipc.Client.call(Client.java:1468)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1399)
>     at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>     at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
>     at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>     at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>     at com.sun.proxy.$Proxy14.mkdirs(Unknown Source)
>     at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2753)
>     at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2724)
>     at
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
>     at
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
>     at
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>     at
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
>     at
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
>     at
> org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:584)
>     at
> org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:526)
>     at
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:458)
>     ... 8 more
>
> Can somebody advise?
>
> Thanks
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>
>
>
>
>
>
>
>
>
>
>
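
Kiran's restart suggestion, quoted above, maps onto the stock Hadoop 2.6 control scripts; a minimal sketch, assuming $HADOOP_INSTALL/sbin is on the PATH as in the .hadoop script:

$ stop-dfs.sh                      # stop the namenode, datanode and secondarynamenode
$ start-dfs.sh                     # start them again
$ hdfs dfsadmin -safemode wait     # block until the namenode has left safe mode
$ hive                             # then retry the Hive CLI

Note that a NameNode always re-enters safe mode briefly after a restart while block reports come in, which is exactly the window the original error was hit in, so the -safemode wait step matters.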

>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>
>     at org.apache.hadoop.ipc.Client.call(Client.java:1468)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1399)
>     at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>     at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
>     at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>     at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>     at com.sun.proxy.$Proxy14.mkdirs(Unknown Source)
>     at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2753)
>     at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2724)
>     at
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
>     at
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
>     at
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>     at
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
>     at
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
>     at
> org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:584)
>     at
> org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:526)
>     at
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:458)
>     ... 8 more
>
> Can somebody advise.
>
> Thanks
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>

Re: Unable to start Hive

Posted by Anand Murali <an...@yahoo.com>.
Find below
anand_vihar@Latitude-E5540:~$ hdfs dfs -mkdir /tmp/abc
anand_vihar@Latitude-E5540:~$ hdfs dfs -ls /tmp/abc
anand_vihar@Latitude-E5540:~$ hdfs dfs -ls /tmp/
Found 2 items
drwxr-xr-x   - anand_vihar supergroup          0 2015-05-15 16:05 /tmp/abc
drwx-wx-wx   - anand_vihar supergroup          0 2015-05-14 12:03 /tmp/hive
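
The listing shows that HDFS writes now go through, which matches the original error text: the NameNode was only in its safe-mode extension window and left it on its own shortly after start-up. For reference, safe mode can be inspected and, if need be, exited by hand; a minimal sketch, assuming the Hadoop 2.6 bin directory is on PATH as in the start-up script:

$ hdfs dfsadmin -safemode get     # prints "Safe mode is ON" or "Safe mode is OFF"
$ hdfs dfsadmin -safemode wait    # blocks until the NameNode leaves safe mode
$ hdfs dfsadmin -safemode leave   # forces it off; rarely needed on a healthy node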

Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)


On Friday, May 15, 2015 4:05 PM, Vikas Parashar <pa...@gmail.com> wrote:

Give me o/p of below command
#hadoop fs -mkdir /tmp/abc
#hadoop fs -ls /tmp/abc
On Fri, May 15, 2015 at 3:08 PM, Anand Murali <an...@yahoo.com> wrote:

Vikas:
Find below
anand_vihar@Latitude-E5540:~$ hadoop dfsamin -report
Error: Could not find or load main class dfsamin
anand_vihar@Latitude-E5540:~$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Configured Capacity: 179431981056 (167.11 GB)
Present Capacity: 142666625024 (132.87 GB)
DFS Remaining: 142665678848 (132.87 GB)
DFS Used: 946176 (924 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Live datanodes (1):

Name: 127.0.0.1:50010 (localhost)
Hostname: Latitude-E5540
Decommission Status : Normal
Configured Capacity: 179431981056 (167.11 GB)
DFS Used: 946176 (924 KB)
Non DFS Used: 36765356032 (34.24 GB)
DFS Remaining: 142665678848 (132.87 GB)
DFS Used%: 0.00%
DFS Remaining%: 79.51%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri May 15 15:07:53 IST 2015
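
For a single-node setup this report looks healthy: one live datanode, no missing or corrupt blocks, and effectively 0% DFS used, so HDFS itself is not the problem. As the deprecation notice says, the non-deprecated spelling of the same command is:

$ hdfs dfsadmin -report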

Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)


On Friday, May 15, 2015 2:52 PM, Vikas Parashar <pa...@gmail.com> wrote:

please send me o/p of below command
# hadoop dfsadmin -report

On Fri, May 15, 2015 at 2:43 PM, Anand Murali <an...@yahoo.com> wrote:

Vikas
Can you be more specific? What should I check for in the Hive logs?
Thanks
Regards
Anand

Sent from my iPhone
On 15-May-2015, at 2:41 pm, Vikas Parashar <pa...@gmail.com> wrote:


Hi Anand, 
It seems your namenode is working fine. I can't see any "safemode" related entries in your namenode log file. Kindly check the Hive logs as well.
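
With the stock Hive 1.x hive-log4j.properties, the CLI logs to ${java.io.tmpdir}/${user.name}/hive.log, i.e. typically /tmp/<user>/hive.log on Linux; a quick look, assuming those defaults have not been overridden:

$ tail -n 100 /tmp/${USER}/hive.log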
On Fri, May 15, 2015 at 12:40 PM, Anand Murali <an...@yahoo.com> wrote:

Vikas:
Please find attached. At this time I would like to tell you that with the current installation, I am able to run mapreduce jobs and pig scripts without any installation errors. So please make sure any suggested change does not break the other installations.
Thanks
Regards,
Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)


On Friday, May 15, 2015 12:31 PM, Kiran Dangeti <ki...@gmail.com> wrote:

Anand,
Sometimes it will error out because some resources are not available. So stop and start the hadoop cluster and see.
On May 15, 2015 12:24 PM, "Anand Murali" <an...@yahoo.com> wrote:

Dear All:
I am running Hadoop-2.6 (pseudo mode) on Ubuntu 15.04, and trying to connect Hive to it after installation. I run . .hadoop as a start-up script which contains the environment variable settings below.
. .hadoop
export HADOOP_HOME=/home/anand_vihar/hadoop-2.6.0
export JAVA_HOME=/home/anand_vihar/jdk1.7.0_75/
export HADOOP_PREFIX=/home/anand_vihar/hadoop-2.6.0
export HADOOP_INSTALL=/home/anand_vihar/hadoop-2.6.0
export PIG_HOME=/home/anand_vihar/pig-0.14.0
export PIG_INSTALL=/home/anand_vihar/pig-0.14.0
export PIG_CLASSPATH=/home/anand_vihar/hadoop-2.6.0/etc/hadoop/
export HIVE_HOME=/home/anand_vihar/hive-1.1.0
export HIVE_INSTALL=/home/anand_vihar/hive-1.1.0
export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin:$HADOOP_HOME:$JAVA_HOME:$PIG_INSTALL/bin:$PIG_CLASSPATH:$HIVE_HOME:$HIVE_INSTALL/bin
echo $HADOOP_HOME
echo $JAVA_HOME
echo $HADOOP_INSTALL
echo $PIG_HOME
echo $PIG_INSTALL
echo $PIG_CLASSPATH
echo $HIVE_HOME
echo $PATH

Error
anand_vihar@Latitude-E5540:~$ hive

Logging initialized using configuration in jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-common-1.1.0.jar!/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/anand_vihar/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-jdbc-1.1.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in safe mode.
The reported blocks 2 has reached the threshold 0.9990 of total blocks 2. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 6 seconds.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:472)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in safe mode.
The reported blocks 2 has reached the threshold 0.9990 of total blocks 2. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 6 seconds.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

    at org.apache.hadoop.ipc.Client.call(Client.java:1468)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy14.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2753)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2724)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
    at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:584)
    at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:526)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:458)
    ... 8 more

Can somebody advise.
Thanks
Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)
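
On the earlier suggestion to stop and start the cluster: on a pseudo-distributed Hadoop 2.6 install this is normally done with the bundled sbin scripts, which the start-up script above already puts on PATH; a minimal sketch:

$ stop-yarn.sh && stop-dfs.sh     # stop YARN, then the HDFS daemons
$ start-dfs.sh && start-yarn.sh   # start HDFS first, then YARN
$ hdfs dfsadmin -safemode wait    # wait out safe mode before retrying
$ hive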

Re: Unable to start Hive

Posted by Vikas Parashar <pa...@gmail.com>.
Give me o/p of below command

#hadoop fs -mkdir /tmp/abc

#hadoop fs -ls /tmp/abc
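
A side note: with HDFS as the default filesystem, hadoop fs and hdfs dfs drive the same FsShell over the same paths, so the hdfs dfs listing in the reply above is the requested output:

$ hadoop fs -ls /tmp/abc    # generic FsShell entry point
$ hdfs dfs -ls /tmp/abc     # HDFS-flavoured entry point; same result here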

On Fri, May 15, 2015 at 3:08 PM, Anand Murali <an...@yahoo.com> wrote:

> Vikas:
>
> Find below
>
> anand_vihar@Latitude-E5540:~$ hadoop dfsamin -report
> Error: Could not find or load main class dfsamin
> anand_vihar@Latitude-E5540:~$ hadoop dfsadmin -report
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
>
> Configured Capacity: 179431981056 (167.11 GB)
> Present Capacity: 142666625024 (132.87 GB)
> DFS Remaining: 142665678848 (132.87 GB)
> DFS Used: 946176 (924 KB)
> DFS Used%: 0.00%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
>
> -------------------------------------------------
> Live datanodes (1):
>
> Name: 127.0.0.1:50010 (localhost)
> Hostname: Latitude-E5540
> Decommission Status : Normal
> Configured Capacity: 179431981056 (167.11 GB)
> DFS Used: 946176 (924 KB)
> Non DFS Used: 36765356032 (34.24 GB)
> DFS Remaining: 142665678848 (132.87 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 79.51%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 1
> Last contact: Fri May 15 15:07:53 IST 2015
>
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>
>
>
>   On Friday, May 15, 2015 2:52 PM, Vikas Parashar <pa...@gmail.com>
> wrote:
>
>
> please send me o/p of below command
>
> # hadoop dfsadmin -report
>
>
> On Fri, May 15, 2015 at 2:43 PM, Anand Murali <an...@yahoo.com>
> wrote:
>
> Vikas
>
> Can you be more specific? What should I check for in the Hive logs?
>
> Thanks
>
> Regards
>
> Anand
>
> Sent from my iPhone
>
> On 15-May-2015, at 2:41 pm, Vikas Parashar <pa...@gmail.com> wrote:
>
> Hi Anand,
>
> It seems your namenode is working fine. I can't see any "safemode" related
> entries in your namenode log file. Kindly check the Hive logs as well.
>
> On Fri, May 15, 2015 at 12:40 PM, Anand Murali <an...@yahoo.com>
> wrote:
>
> Vikas:
>
> Please find attached. At this time I would like to tell you that with the
> current installation, I am able to run mapreduce jobs and pig scripts
> without any installation errors. So please make sure any suggested change
> does not break the other installations.
>
> Thanks
>
> Regards,
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>
>
>
>   On Friday, May 15, 2015 12:31 PM, Kiran Dangeti <
> kirandkumar2013@gmail.com> wrote:
>
>
> Anand,
> Sometimes it will error out because some resources are not available. So stop
> and start the hadoop cluster and see.
> On May 15, 2015 12:24 PM, "Anand Murali" <an...@yahoo.com> wrote:
>
> Dear All:
>
> I am running Hadoop-2.6 (pseudo mode) on Ubuntu 15.04, and trying to
> connect Hive to it after installation. I run . .hadoop as a start-up script
> which contains the environment variable settings below.
>
> *. .hadoop*
> export HADOOP_HOME=/home/anand_vihar/hadoop-2.6.0
> export JAVA_HOME=/home/anand_vihar/jdk1.7.0_75/
> export HADOOP_PREFIX=/home/anand_vihar/hadoop-2.6.0
> export HADOOP_INSTALL=/home/anand_vihar/hadoop-2.6.0
> export PIG_HOME=/home/anand_vihar/pig-0.14.0
> export PIG_INSTALL=/home/anand_vihar/pig-0.14.0
> export PIG_CLASSPATH=/home/anand_vihar/hadoop-2.6.0/etc/hadoop/
> export HIVE_HOME=/home/anand_vihar/hive-1.1.0
> export HIVE_INSTALL=/home/anand_vihar/hive-1.1.0
> export
> PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin:$HADOOP_HOME:$JAVA_HOME:$PIG_INSTALL/bin:$PIG_CLASSPATH:$HIVE_HOME:$HIVE_INSTALL/bin
> echo $HADOOP_HOME
> echo $JAVA_HOME
> echo $HADOOP_INSTALL
> echo $PIG_HOME
> echo $PIG_INSTALL
> echo $PIG_CLASSPATH
> echo $HIVE_HOME
> echo $PATH
>
>
> *Error*
>
> anand_vihar@Latitude-E5540:~$ hive
>
> Logging initialized using configuration in
> jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-common-1.1.0.jar!/hive-log4j.properties
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/home/anand_vihar/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-jdbc-1.1.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Exception in thread "main" java.lang.RuntimeException:
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException):
> Cannot create directory
> /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in
> safe mode.
> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2.
> The number of live datanodes 1 has reached the minimum number 0. In safe
> mode extension. Safe mode will be turned off automatically in 6 seconds.
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>     at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>     at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>
>     at
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:472)
>     at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by:
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException):
> Cannot create directory
> /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in
> safe mode.
> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2.
> The number of live datanodes 1 has reached the minimum number 0. In safe
> mode extension. Safe mode will be turned off automatically in 6 seconds.
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>     at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>     at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>
>     at org.apache.hadoop.ipc.Client.call(Client.java:1468)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1399)
>     at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>     at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
>     at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>     at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>     at com.sun.proxy.$Proxy14.mkdirs(Unknown Source)
>     at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2753)
>     at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2724)
>     at
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
>     at
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
>     at
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>     at
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
>     at
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
>     at
> org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:584)
>     at
> org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:526)
>     at
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:458)
>     ... 8 more
>
> Can somebody advise?
>
> Thanks
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>
>
>
>
>
>
>
>
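
The quoted trace names the cause itself: after a (re)start the NameNode stays in safe mode until the reported block count reaches dfs.namenode.safemode.threshold-pct (0.999 by default) of the total, and then waits out a safe-mode extension window (dfs.namenode.safemode.extension, 30000 ms by default) before accepting writes; hence "Safe mode will be turned off automatically in 6 seconds". On a single-node box that window can be shortened in hdfs-site.xml. A minimal sketch, assuming the stock Hadoop 2.x property name:

<property>
  <name>dfs.namenode.safemode.extension</name>
  <value>5000</value>
</property>

With that in place, a Hive session started a few seconds after start-dfs.sh is far less likely to hit the SafeModeException above.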

Re: Unable to start Hive

Posted by Vikas Parashar <pa...@gmail.com>.
Give me the o/p of the below commands:

# hadoop fs -mkdir /tmp/abc

# hadoop fs -ls /tmp/abc
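
If the mkdir also fails with the safe-mode error, the state can be checked, and cleared by hand if you would rather not wait, with the standard dfsadmin subcommands:

# hdfs dfsadmin -safemode get
# hdfs dfsadmin -safemode leave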

On Fri, May 15, 2015 at 3:08 PM, Anand Murali <an...@yahoo.com> wrote:

> Vikas:
>
> Find below
>
> anand_vihar@Latitude-E5540:~$ hadoop dfsamin -report
> Error: Could not find or load main class dfsamin
> anand_vihar@Latitude-E5540:~$ hadoop dfsadmin -report
> DEPRECATED: Use of this script to execute hdfs command is deprecated.
> Instead use the hdfs command for it.
>
> Configured Capacity: 179431981056 (167.11 GB)
> Present Capacity: 142666625024 (132.87 GB)
> DFS Remaining: 142665678848 (132.87 GB)
> DFS Used: 946176 (924 KB)
> DFS Used%: 0.00%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
>
> -------------------------------------------------
> Live datanodes (1):
>
> Name: 127.0.0.1:50010 (localhost)
> Hostname: Latitude-E5540
> Decommission Status : Normal
> Configured Capacity: 179431981056 (167.11 GB)
> DFS Used: 946176 (924 KB)
> Non DFS Used: 36765356032 (34.24 GB)
> DFS Remaining: 142665678848 (132.87 GB)
> DFS Used%: 0.00%
> DFS Remaining%: 79.51%
> Configured Cache Capacity: 0 (0 B)
> Cache Used: 0 (0 B)
> Cache Remaining: 0 (0 B)
> Cache Used%: 100.00%
> Cache Remaining%: 0.00%
> Xceivers: 1
> Last contact: Fri May 15 15:07:53 IST 2015
>
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>
>
>
>   On Friday, May 15, 2015 2:52 PM, Vikas Parashar <pa...@gmail.com>
> wrote:
>
>
> please send me the o/p of the below command:
>
> # hadoop dfsadmin -report
>
>
> On Fri, May 15, 2015 at 2:43 PM, Anand Murali <an...@yahoo.com>
> wrote:
>
> Vikas
>
> Can you be more specific? What should I check for in the Hive logs?
>
> Thanks
>
> Regards
>
> Anand
>
> Sent from my iPhone
>
> On 15-May-2015, at 2:41 pm, Vikas Parashar <pa...@gmail.com> wrote:
>
> Hi Anand,
>
> It seems your namenode is working fine. I can't see any "safemode" related
> logs in your namenode file. Kindly check the Hive logs as well.
>
> On Fri, May 15, 2015 at 12:40 PM, Anand Murali <an...@yahoo.com>
> wrote:
>
> Vikas:
>
> Please find attached. At this time I would like to tell you that with the
> current installation, I am able to run mapreduce jobs and pig scripts
> without any installation errors. So please make sure any suggestions do not
> break or cascade into the other installations.
>
> Thanks
>
> Regards,
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>
>
>
>   On Friday, May 15, 2015 12:31 PM, Kiran Dangeti <
> kirandkumar2013@gmail.com> wrote:
>
>
> Anand,
> Sometimes it errors out because some resources are not available. So stop
> and restart the Hadoop cluster and see.
> On May 15, 2015 12:24 PM, "Anand Murali" <an...@yahoo.com> wrote:
>
> Dear All:
>
> I am running Hadoop-2.6 (pseudo mode) on Ubuntu 15.04, and trying to
> connect Hive to it after installation. I run . .hadoop as a start-up script
> which contains the environment variable settings. Find below
>
> *. .hadoop*
> export HADOOP_HOME=/home/anand_vihar/hadoop-2.6.0
> export JAVA_HOME=/home/anand_vihar/jdk1.7.0_75/
> export HADOOP_PREFIX=/home/anand_vihar/hadoop-2.6.0
> export HADOOP_INSTALL=/home/anand_vihar/hadoop-2.6.0
> export PIG_HOME=/home/anand_vihar/pig-0.14.0
> export PIG_INSTALL=/home/anand_vihar/pig-0.14.0
> export PIG_CLASSPATH=/home/anand_vihar/hadoop-2.6.0/etc/hadoop/
> export HIVE_HOME=/home/anand_vihar/hive-1.1.0
> export HIVE_INSTALL=/home/anand_vihar/hive-1.1.0
> export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin:$HADOOP_HOME:$JAVA_HOME:$PIG_INSTALL/bin:$PIG_CLASSPATH:$HIVE_HOME:$HIVE_INSTALL/bin
> echo $HADOOP_HOME
> echo $JAVA_HOME
> echo $HADOOP_INSTALL
> echo $PIG_HOME
> echo $PIG_INSTALL
> echo $PIG_CLASSPATH
> echo $HIVE_HOME
> echo $PATH
>
>
> *Error*
>
> anand_vihar@Latitude-E5540:~$ hive
>
> Logging initialized using configuration in
> jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-common-1.1.0.jar!/hive-log4j.properties
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/home/anand_vihar/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-jdbc-1.1.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Exception in thread "main" java.lang.RuntimeException:
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException):
> Cannot create directory
> /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in
> safe mode.
> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2.
> The number of live datanodes 1 has reached the minimum number 0. In safe
> mode extension. Safe mode will be turned off automatically in 6 seconds.
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>     at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>     at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>
>     at
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:472)
>     at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by:
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException):
> Cannot create directory
> /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in
> safe mode.
> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2.
> The number of live datanodes 1 has reached the minimum number 0. In safe
> mode extension. Safe mode will be turned off automatically in 6 seconds.
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>     at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>     at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>
>     at org.apache.hadoop.ipc.Client.call(Client.java:1468)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1399)
>     at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>     at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
>     at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>     at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>     at com.sun.proxy.$Proxy14.mkdirs(Unknown Source)
>     at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2753)
>     at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2724)
>     at
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
>     at
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
>     at
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>     at
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
>     at
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
>     at
> org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:584)
>     at
> org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:526)
>     at
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:458)
>     ... 8 more
>
> Can somebody advise?
>
> Thanks
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>
>
>
>
>
>
>
>

Re: Unable to start Hive

Posted by Anand Murali <an...@yahoo.com>.
Vikas:
Find below
anand_vihar@Latitude-E5540:~$ hadoop dfsamin -report
Error: Could not find or load main class dfsamin
anand_vihar@Latitude-E5540:~$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Configured Capacity: 179431981056 (167.11 GB)
Present Capacity: 142666625024 (132.87 GB)
DFS Remaining: 142665678848 (132.87 GB)
DFS Used: 946176 (924 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Live datanodes (1):

Name: 127.0.0.1:50010 (localhost)
Hostname: Latitude-E5540
Decommission Status : Normal
Configured Capacity: 179431981056 (167.11 GB)
DFS Used: 946176 (924 KB)
Non DFS Used: 36765356032 (34.24 GB)
DFS Remaining: 142665678848 (132.87 GB)
DFS Used%: 0.00%
DFS Remaining%: 79.51%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri May 15 15:07:53 IST 2015
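
As the deprecation notice at the top of the transcript suggests, the hdfs front-end is the preferred form of the same command on Hadoop 2.x:

# hdfs dfsadmin -report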

Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)


On Friday, May 15, 2015 2:52 PM, Vikas Parashar <pa...@gmail.com> wrote:

please send me the o/p of the below command:

# hadoop dfsadmin -report

On Fri, May 15, 2015 at 2:43 PM, Anand Murali <an...@yahoo.com> wrote:

Vikas
Can you be more specific? What should I check for in the Hive logs?
Thanks
Regards
Anand

Sent from my iPhone
On 15-May-2015, at 2:41 pm, Vikas Parashar <pa...@gmail.com> wrote:


Hi Anand,

It seems your namenode is working fine. I can't see any "safemode" related logs in your namenode file. Kindly check the Hive logs as well.
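
For reference, the Hive 1.x CLI writes its log to ${java.io.tmpdir}/${user.name}/hive.log by default, per the bundled hive-log4j.properties named in the startup banner, so on this box (assuming the defaults are unchanged) it can be followed with:

# tail -f /tmp/anand_vihar/hive.log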
On Fri, May 15, 2015 at 12:40 PM, Anand Murali <an...@yahoo.com> wrote:

Vikas:
Please find attached. At this time I would like to tell you that with the current installation, I am able to run mapreduce jobs and pig scripts without any installation errors. So please make sure any suggestions do not break or cascade into the other installations.
Thanks
Regards,
Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)


On Friday, May 15, 2015 12:31 PM, Kiran Dangeti <ki...@gmail.com> wrote:

Anand,
Sometimes it errors out because some resources are not available. So stop and restart the Hadoop cluster and see (a typical restart sequence for this kind of setup is sketched below).

On May 15, 2015 12:24 PM, "Anand Murali" <an...@yahoo.com> wrote:
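
For a pseudo-distributed install like the one described below, that restart is typically (assuming the Hadoop sbin scripts are on PATH, as in the posted start-up script):

$ stop-dfs.sh
$ start-dfs.sh

plus stop-yarn.sh / start-yarn.sh if YARN is in use.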

Dear All:
I am running Hadoop-2.6 (pseudo mode) on Ubuntu 15.04, and trying to connect Hive to it after installation. I run . .hadoop as a start-up script, which contains the environment variable settings. Find below

. .hadoop
export HADOOP_HOME=/home/anand_vihar/hadoop-2.6.0
export JAVA_HOME=/home/anand_vihar/jdk1.7.0_75/
export HADOOP_PREFIX=/home/anand_vihar/hadoop-2.6.0
export HADOOP_INSTALL=/home/anand_vihar/hadoop-2.6.0
export PIG_HOME=/home/anand_vihar/pig-0.14.0
export PIG_INSTALL=/home/anand_vihar/pig-0.14.0
export PIG_CLASSPATH=/home/anand_vihar/hadoop-2.6.0/etc/hadoop/
export HIVE_HOME=/home/anand_vihar/hive-1.1.0
export HIVE_INSTALL=/home/anand_vihar/hive-1.1.0
export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin:$HADOOP_HOME:$JAVA_HOME:$PIG_INSTALL/bin:$PIG_CLASSPATH:$HIVE_HOME:$HIVE_INSTALL/bin
echo $HADOOP_HOME
echo $JAVA_HOME
echo $HADOOP_INSTALL
echo $PIG_HOME
echo $PIG_INSTALL
echo $PIG_CLASSPATH
echo $HIVE_HOME
echo $PATH
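
A side note on the script above: $HADOOP_HOME, $JAVA_HOME and $HIVE_HOME are appended to PATH as bare install directories rather than bin directories. That is harmless and unrelated to the error below, but a leaner line covering the executables actually needed would be:

export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin:$PIG_INSTALL/bin:$HIVE_INSTALL/bin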

Error
anand_vihar@Latitude-E5540:~$ hive

Logging initialized using configuration in jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-common-1.1.0.jar!/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/anand_vihar/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-jdbc-1.1.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in safe mode.
The reported blocks 2 has reached the threshold 0.9990 of total blocks 2. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 6 seconds.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:472)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in safe mode.
The reported blocks 2 has reached the threshold 0.9990 of total blocks 2. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 6 seconds.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

    at org.apache.hadoop.ipc.Client.call(Client.java:1468)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy14.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2753)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2724)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
    at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:584)
    at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:526)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:458)
    ... 8 more

Can somebody advise?
Thanks
Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)

Re: Unable to start Hive

Posted by Anand Murali <an...@yahoo.com>.
Vikas:
Find below
anand_vihar@Latitude-E5540:~$ hadoop dfsamin -report
Error: Could not find or load main class dfsamin
anand_vihar@Latitude-E5540:~$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Configured Capacity: 179431981056 (167.11 GB)
Present Capacity: 142666625024 (132.87 GB)
DFS Remaining: 142665678848 (132.87 GB)
DFS Used: 946176 (924 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Live datanodes (1):

Name: 127.0.0.1:50010 (localhost)
Hostname: Latitude-E5540
Decommission Status : Normal
Configured Capacity: 179431981056 (167.11 GB)
DFS Used: 946176 (924 KB)
Non DFS Used: 36765356032 (34.24 GB)
DFS Remaining: 142665678848 (132.87 GB)
DFS Used%: 0.00%
DFS Remaining%: 79.51%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri May 15 15:07:53 IST 2015

 Anand Murali  11/7, 'Anand Vihar', Kandasamy St, MylaporeChennai - 600 004, IndiaPh: (044)- 28474593/ 43526162 (voicemail) 


     On Friday, May 15, 2015 2:52 PM, Vikas Parashar <pa...@gmail.com> wrote:
   

 please send me o/p of below command
# hadoop dfsadmin -report

On Fri, May 15, 2015 at 2:43 PM, Anand Murali <an...@yahoo.com> wrote:

Vikas
Can you be more specific. What to check for in Hive logs.
Thanks
Regards
Anand

Sent from my iPhone
On 15-May-2015, at 2:41 pm, Vikas Parashar <pa...@gmail.com> wrote:


Hi Anand, 
It seems your namenode is working fine. I can't see any "safemode" related logs in your namenode file. Kindly check it hive logs as well.
On Fri, May 15, 2015 at 12:40 PM, Anand Murali <an...@yahoo.com> wrote:

Vikas:
Please find attached. At this time I would like to tell you that with the current installation, I am able to run mapreduce jobs and pig scripts without any installation errors. So please, any suggestions made should not break and cascade other installations.
Thanks
Regards,
 Anand Murali  11/7, 'Anand Vihar', Kandasamy St, MylaporeChennai - 600 004, IndiaPh: (044)- 28474593/ 43526162 (voicemail) 


     On Friday, May 15, 2015 12:31 PM, Kiran Dangeti <ki...@gmail.com> wrote:
   

 Anand,Sometimes it will error out due some resources are not available. So stop and start the hadoop cluster and seeOn May 15, 2015 12:24 PM, "Anand Murali" <an...@yahoo.com> wrote:

Dear All:
I am running Hadoop-2.6 (pseudo mode) on Ubuntu 15.04, and trying to connect Hive to it after installation. I run . .hadoop as start-up script which contain environment variables setting. Find below
. ,hadoopexport HADOOP_HOME=/home/anand_vihar/hadoop-2.6.0
export JAVA_HOME=/home/anand_vihar/jdk1.7.0_75/
export HADOOP\_PREFIX=/home/anand_vihar/hadoop-2.6.0
export HADOOP_INSTALL=/home/anand_vihar/hadoop-2.6.0
export PIG_HOME=/home/anand_vihar/pig-0.14.0
export PIG_INSTALL=/home/anand_vihar/pig-0.14.0
export PIG_CLASSPATH=/home/anand_vihar/hadoop-2.6.0/etc/hadoop/
export HIVE_HOME=/home/anand_vihar/hive-1.1.0
export HIVE_INSTALL=/home/anand_vihar/hive-1.1.0
export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin:$HADOOP_HOME:$JAVA_HOME:$PIG_INSTALL/bin:$PIG_CLASSPATH:$HIVE_HOME:$HIVE_INSTALL/bin
echo $HADOOP_HOME
echo $JAVA_HOME
echo $HADOOP_INSTALL
echo $PIG_HOME
echo $PIG_INSTALL
echo $PIG_CLASSPATH
echo $HIVE_HOME
echo $PATH

Error
anand_vihar@Latitude-E5540:~$ hive

Logging initialized using configuration in jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-common-1.1.0.jar!/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/anand_vihar/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-jdbc-1.1.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in safe mode.
The reported blocks 2 has reached the threshold 0.9990 of total blocks 2. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 6 seconds.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:472)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in safe mode.
The reported blocks 2 has reached the threshold 0.9990 of total blocks 2. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 6 seconds.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

    at org.apache.hadoop.ipc.Client.call(Client.java:1468)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy14.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2753)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2724)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
    at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:584)
    at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:526)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:458)
    ... 8 more

Can somebody advise?
Thanks

Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)
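
For context, the SafeModeException in the trace above is raised while the NameNode is still in its start-up safe mode extension, and the message itself says safe mode will be turned off automatically within seconds. A minimal check, assuming the Hadoop 2.6 bin directory is on the PATH as in the start-up script quoted earlier:

hdfs dfsadmin -safemode get      # prints whether safe mode is ON or OFF
hdfs dfsadmin -safemode leave    # forces the NameNode out of safe mode, only needed if it never leaves on its own

Waiting a few seconds after start-dfs.sh before launching hive avoids the race entirely.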

Re: Unable to start Hive

Posted by Anand Murali <an...@yahoo.com>.
Vikas:
Find below
anand_vihar@Latitude-E5540:~$ hadoop dfsamin -report
Error: Could not find or load main class dfsamin
anand_vihar@Latitude-E5540:~$ hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Configured Capacity: 179431981056 (167.11 GB)
Present Capacity: 142666625024 (132.87 GB)
DFS Remaining: 142665678848 (132.87 GB)
DFS Used: 946176 (924 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Live datanodes (1):

Name: 127.0.0.1:50010 (localhost)
Hostname: Latitude-E5540
Decommission Status : Normal
Configured Capacity: 179431981056 (167.11 GB)
DFS Used: 946176 (924 KB)
Non DFS Used: 36765356032 (34.24 GB)
DFS Remaining: 142665678848 (132.87 GB)
DFS Used%: 0.00%
DFS Remaining%: 79.51%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri May 15 15:07:53 IST 2015

Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)
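
As the DEPRECATED notice above says, hadoop dfsadmin is only a legacy wrapper in Hadoop 2.x; the equivalent invocation through the hdfs front end, which produces the same report, would be:

hdfs dfsadmin -report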


On Friday, May 15, 2015 2:52 PM, Vikas Parashar <pa...@gmail.com> wrote:

Please send me the o/p of the command below:
# hadoop dfsadmin -report

On Fri, May 15, 2015 at 2:43 PM, Anand Murali <an...@yahoo.com> wrote:

Vikas
Can you be more specific? What should I check for in the Hive logs?
Thanks
Regards
Anand

Sent from my iPhone
On 15-May-2015, at 2:41 pm, Vikas Parashar <pa...@gmail.com> wrote:


Hi Anand, 
It seems your namenode is working fine. I can't see any "safemode"-related logs in your namenode file. Kindly check the Hive logs as well.
On Fri, May 15, 2015 at 12:40 PM, Anand Murali <an...@yahoo.com> wrote:

Vikas:
Please find attached. At this time I would like to tell you that with the current installation I am able to run mapreduce jobs and pig scripts without any errors. So please make sure that any suggestions do not break the other installations.
Thanks
Regards,
Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)


On Friday, May 15, 2015 12:31 PM, Kiran Dangeti <ki...@gmail.com> wrote:

Anand,
Sometimes it will error out because some resources are not available. So stop and start the hadoop cluster and see.
On May 15, 2015 12:24 PM, "Anand Murali" <an...@yahoo.com> wrote:

Dear All:
I am running Hadoop-2.6 (pseudo mode) on Ubuntu 15.04, and trying to connect Hive to it after installation. I run . .hadoop as a start-up script, which contains the environment variable settings. Find below:
. .hadoop
export HADOOP_HOME=/home/anand_vihar/hadoop-2.6.0
export JAVA_HOME=/home/anand_vihar/jdk1.7.0_75/
export HADOOP_PREFIX=/home/anand_vihar/hadoop-2.6.0
export HADOOP_INSTALL=/home/anand_vihar/hadoop-2.6.0
export PIG_HOME=/home/anand_vihar/pig-0.14.0
export PIG_INSTALL=/home/anand_vihar/pig-0.14.0
export PIG_CLASSPATH=/home/anand_vihar/hadoop-2.6.0/etc/hadoop/
export HIVE_HOME=/home/anand_vihar/hive-1.1.0
export HIVE_INSTALL=/home/anand_vihar/hive-1.1.0
export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin:$HADOOP_HOME:$JAVA_HOME:$PIG_INSTALL/bin:$PIG_CLASSPATH:$HIVE_HOME:$HIVE_INSTALL/bin
echo $HADOOP_HOME
echo $JAVA_HOME
echo $HADOOP_INSTALL
echo $PIG_HOME
echo $PIG_INSTALL
echo $PIG_CLASSPATH
echo $HIVE_HOME
echo $PATH

Error
anand_vihar@Latitude-E5540:~$ hive

Logging initialized using configuration in jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-common-1.1.0.jar!/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/anand_vihar/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-jdbc-1.1.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in safe mode.
The reported blocks 2 has reached the threshold 0.9990 of total blocks 2. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 6 seconds.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:472)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in safe mode.
The reported blocks 2 has reached the threshold 0.9990 of total blocks 2. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 6 seconds.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

    at org.apache.hadoop.ipc.Client.call(Client.java:1468)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy14.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2753)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2724)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
    at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:584)
    at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:526)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:458)
    ... 8 more

Can somebody advise?
Thanks

Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)

Re: Unable to start Hive

Posted by Vikas Parashar <pa...@gmail.com>.
Please send me the o/p of the command below:

# hadoop dfsadmin -report


On Fri, May 15, 2015 at 2:43 PM, Anand Murali <an...@yahoo.com> wrote:

> Vikas
>
> Can you be more specific? What should I check for in the Hive logs?
>
> Thanks
>
> Regards
>
> Anand
>
> Sent from my iPhone
>
> On 15-May-2015, at 2:41 pm, Vikas Parashar <pa...@gmail.com> wrote:
>
> Hi Anand,
>
> It seems your namenode is working fine. I can't see any "safemode"-related
> logs in your namenode file. Kindly check the Hive logs as well.
>
> On Fri, May 15, 2015 at 12:40 PM, Anand Murali <an...@yahoo.com>
> wrote:
>
>> Vikas:
>>
>> Please find attached. At this time I would like to tell you that with the
>> current installation I am able to run mapreduce jobs and pig scripts
>> without any errors. So please make sure that any suggestions do not break
>> the other installations.
>>
>> Thanks
>>
>> Regards,
>>
>> Anand Murali
>> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
>> Chennai - 600 004, India
>> Ph: (044)- 28474593/ 43526162 (voicemail)
>>
>>
>>
>>   On Friday, May 15, 2015 12:31 PM, Kiran Dangeti <
>> kirandkumar2013@gmail.com> wrote:
>>
>>
>> Anand,
>> Sometimes it will error out because some resources are not available. So
>> stop and start the hadoop cluster and see.
>> On May 15, 2015 12:24 PM, "Anand Murali" <an...@yahoo.com> wrote:
>>
>> Dear All:
>>
>> I am running Hadoop-2.6 (pseudo mode) on Ubuntu 15.04, and trying to
>> connect Hive to it after installation. I run . .hadoop as a start-up script,
>> which contains the environment variable settings. Find below:
>>
>> . .hadoop
>> export HADOOP_HOME=/home/anand_vihar/hadoop-2.6.0
>> export JAVA_HOME=/home/anand_vihar/jdk1.7.0_75/
>> export HADOOP_PREFIX=/home/anand_vihar/hadoop-2.6.0
>> export HADOOP_INSTALL=/home/anand_vihar/hadoop-2.6.0
>> export PIG_HOME=/home/anand_vihar/pig-0.14.0
>> export PIG_INSTALL=/home/anand_vihar/pig-0.14.0
>> export PIG_CLASSPATH=/home/anand_vihar/hadoop-2.6.0/etc/hadoop/
>> export HIVE_HOME=/home/anand_vihar/hive-1.1.0
>> export HIVE_INSTALL=/home/anand_vihar/hive-1.1.0
>> export
>> PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin:$HADOOP_HOME:$JAVA_HOME:$PIG_INSTALL/bin:$PIG_CLASSPATH:$HIVE_HOME:$HIVE_INSTALL/bin
>> echo $HADOOP_HOME
>> echo $JAVA_HOME
>> echo $HADOOP_INSTALL
>> echo $PIG_HOME
>> echo $PIG_INSTALL
>> echo $PIG_CLASSPATH
>> echo $HIVE_HOME
>> echo $PATH
>>
>>
>> Error
>>
>> anand_vihar@Latitude-E5540:~$ hive
>>
>> Logging initialized using configuration in
>> jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-common-1.1.0.jar!/hive-log4j.properties
>> SLF4J: Class path contains multiple SLF4J bindings.
>> SLF4J: Found binding in
>> [jar:file:/home/anand_vihar/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: Found binding in
>> [jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-jdbc-1.1.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> explanation.
>> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>> Exception in thread "main" java.lang.RuntimeException:
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException):
>> Cannot create directory
>> /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in
>> safe mode.
>> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2.
>> The number of live datanodes 1 has reached the minimum number 0. In safe
>> mode extension. Safe mode will be turned off automatically in 6 seconds.
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>>     at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>>     at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>     at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>     at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>>
>>     at
>> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:472)
>>     at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
>>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>     at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>     at java.lang.reflect.Method.invoke(Method.java:606)
>>     at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>>     at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
>> Caused by:
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException):
>> Cannot create directory
>> /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in
>> safe mode.
>> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2.
>> The number of live datanodes 1 has reached the minimum number 0. In safe
>> mode extension. Safe mode will be turned off automatically in 6 seconds.
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>>     at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>>     at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>     at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>     at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>>
>>     at org.apache.hadoop.ipc.Client.call(Client.java:1468)
>>     at org.apache.hadoop.ipc.Client.call(Client.java:1399)
>>     at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>>     at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
>>     at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>     at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>     at java.lang.reflect.Method.invoke(Method.java:606)
>>     at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>>     at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>>     at com.sun.proxy.$Proxy14.mkdirs(Unknown Source)
>>     at
>> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2753)
>>     at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2724)
>>     at
>> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
>>     at
>> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
>>     at
>> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>>     at
>> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
>>     at
>> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
>>     at
>> org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:584)
>>     at
>> org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:526)
>>     at
>> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:458)
>>     ... 8 more
>>
>> Can somebody advise?
>>
>> Thanks
>>
>> Anand Murali
>> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
>> Chennai - 600 004, India
>> Ph: (044)- 28474593/ 43526162 (voicemail)
>>
>>
>>
>>
>
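
To follow up on Vikas's earlier suggestion to check the Hive logs (quoted above), and to confirm the safe-mode transition in the NameNode log, a quick sketch, assuming the default log locations for this installation (Hadoop writes under $HADOOP_HOME/logs, and Hive 1.1's stock hive-log4j.properties writes to /tmp/<user>/hive.log):

grep -i "safe mode" /home/anand_vihar/hadoop-2.6.0/logs/hadoop-*-namenode-*.log
tail -n 50 /tmp/anand_vihar/hive.log

The grep should show the NameNode entering and then leaving safe mode shortly after start-up, matching the "Safe mode will be turned off automatically" message in the original trace.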

Re: Unable to start Hive

Posted by Vikas Parashar <pa...@gmail.com>.
please send me o/p of below command

# hadoop dfsadmin -report


On Fri, May 15, 2015 at 2:43 PM, Anand Murali <an...@yahoo.com> wrote:

> Vikas
>
> Can you be more specific. What to check for in Hive logs.
>
> Thanks
>
> Regards
>
> Anand
>
> Sent from my iPhone
>
> On 15-May-2015, at 2:41 pm, Vikas Parashar <pa...@gmail.com> wrote:
>
> Hi Anand,
>
> It seems your namenode is working fine. I can't see any "safemode" related
> logs in your namenode file. Kindly check it hive logs as well.
>
> On Fri, May 15, 2015 at 12:40 PM, Anand Murali <an...@yahoo.com>
> wrote:
>
>> Vikas:
>>
>> Please find attached. At this time I would like to tell you that with the
>> current installation, I am able to run mapreduce jobs and pig scripts
>> without any installation errors. So please, any suggestions made should not
>> break and cascade other installations.
>>
>> Thanks
>>
>> Regards,
>>
>> Anand Murali
>> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
>> Chennai - 600 004, India
>> Ph: (044)- 28474593/ 43526162 (voicemail)
>>
>>
>>
>>   On Friday, May 15, 2015 12:31 PM, Kiran Dangeti <
>> kirandkumar2013@gmail.com> wrote:
>>
>>
>> Anand,
>> Sometimes it will error out due some resources are not available. So stop
>> and start the hadoop cluster and see
>> On May 15, 2015 12:24 PM, "Anand Murali" <an...@yahoo.com> wrote:
>>
>> Dear All:
>>
>> I am running Hadoop-2.6 (pseudo mode) on Ubuntu 15.04, and trying to
>> connect Hive to it after installation. I run . .hadoop as start-up script
>> which contain environment variables setting. Find below
>>
>> *. ,hadoop*
>> export HADOOP_HOME=/home/anand_vihar/hadoop-2.6.0
>> export JAVA_HOME=/home/anand_vihar/jdk1.7.0_75/
>> export HADOOP\_PREFIX=/home/anand_vihar/hadoop-2.6.0
>> export HADOOP_INSTALL=/home/anand_vihar/hadoop-2.6.0
>> export PIG_HOME=/home/anand_vihar/pig-0.14.0
>> export PIG_INSTALL=/home/anand_vihar/pig-0.14.0
>> export PIG_CLASSPATH=/home/anand_vihar/hadoop-2.6.0/etc/hadoop/
>> export HIVE_HOME=/home/anand_vihar/hive-1.1.0
>> export HIVE_INSTALL=/home/anand_vihar/hive-1.1.0
>> export
>> PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin:$HADOOP_HOME:$JAVA_HOME:$PIG_INSTALL/bin:$PIG_CLASSPATH:$HIVE_HOME:$HIVE_INSTALL/bin
>> echo $HADOOP_HOME
>> echo $JAVA_HOME
>> echo $HADOOP_INSTALL
>> echo $PIG_HOME
>> echo $PIG_INSTALL
>> echo $PIG_CLASSPATH
>> echo $HIVE_HOME
>> echo $PATH
>>
>>
>> *Error*
>>
>> anand_vihar@Latitude-E5540:~$ hive
>>
>> Logging initialized using configuration in
>> jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-common-1.1.0.jar!/hive-log4j.properties
>> SLF4J: Class path contains multiple SLF4J bindings.
>> SLF4J: Found binding in
>> [jar:file:/home/anand_vihar/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: Found binding in
>> [jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-jdbc-1.1.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> explanation.
>> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>> Exception in thread "main" java.lang.RuntimeException:
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException):
>> Cannot create directory
>> /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in
>> safe mode.
>> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2.
>> The number of live datanodes 1 has reached the minimum number 0. In safe
>> mode extension. Safe mode will be turned off automatically in 6 seconds.
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>>     at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>>     at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>     at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>     at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>>
>>     at
>> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:472)
>>     at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
>>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>     at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>     at java.lang.reflect.Method.invoke(Method.java:606)
>>     at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>>     at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
>> Caused by:
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException):
>> Cannot create directory
>> /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in
>> safe mode.
>> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2.
>> The number of live datanodes 1 has reached the minimum number 0. In safe
>> mode extension. Safe mode will be turned off automatically in 6 seconds.
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>>     at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>>     at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>     at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>     at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>>
>>     at org.apache.hadoop.ipc.Client.call(Client.java:1468)
>>     at org.apache.hadoop.ipc.Client.call(Client.java:1399)
>>     at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>>     at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
>>     at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>     at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>     at java.lang.reflect.Method.invoke(Method.java:606)
>>     at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>>     at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>>     at com.sun.proxy.$Proxy14.mkdirs(Unknown Source)
>>     at
>> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2753)
>>     at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2724)
>>     at
>> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
>>     at
>> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
>>     at
>> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>>     at
>> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
>>     at
>> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
>>     at
>> org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:584)
>>     at
>> org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:526)
>>     at
>> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:458)
>>     ... 8 more
>>
>> Can somebody advise.
>>
>> Thanks
>>
>> Anand Murali
>> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
>> Chennai - 600 004, India
>> Ph: (044)- 28474593/ 43526162 (voicemail)
>>
>>
>>
>>
>

Re: Unable to start Hive

Posted by Vikas Parashar <pa...@gmail.com>.
please send me o/p of below command

# hadoop dfsadmin -report


On Fri, May 15, 2015 at 2:43 PM, Anand Murali <an...@yahoo.com> wrote:

> Vikas
>
> Can you be more specific. What to check for in Hive logs.
>
> Thanks
>
> Regards
>
> Anand
>
> Sent from my iPhone
>
> On 15-May-2015, at 2:41 pm, Vikas Parashar <pa...@gmail.com> wrote:
>
> Hi Anand,
>
> It seems your namenode is working fine. I can't see any "safemode" related
> logs in your namenode file. Kindly check it hive logs as well.
>
> On Fri, May 15, 2015 at 12:40 PM, Anand Murali <an...@yahoo.com>
> wrote:
>
>> Vikas:
>>
>> Please find attached. At this time I would like to tell you that with the
>> current installation, I am able to run mapreduce jobs and pig scripts
>> without any installation errors. So please, any suggestions made should not
>> break and cascade other installations.
>>
>> Thanks
>>
>> Regards,
>>
>> Anand Murali
>> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
>> Chennai - 600 004, India
>> Ph: (044)- 28474593/ 43526162 (voicemail)
>>
>>
>>
>>   On Friday, May 15, 2015 12:31 PM, Kiran Dangeti <
>> kirandkumar2013@gmail.com> wrote:
>>
>>
>> Anand,
>> Sometimes it will error out due some resources are not available. So stop
>> and start the hadoop cluster and see
>> On May 15, 2015 12:24 PM, "Anand Murali" <an...@yahoo.com> wrote:
>>
>> Dear All:
>>
>> I am running Hadoop-2.6 (pseudo mode) on Ubuntu 15.04, and trying to
>> connect Hive to it after installation. I run . .hadoop as start-up script
>> which contain environment variables setting. Find below
>>
>> *. ,hadoop*
>> export HADOOP_HOME=/home/anand_vihar/hadoop-2.6.0
>> export JAVA_HOME=/home/anand_vihar/jdk1.7.0_75/
>> export HADOOP\_PREFIX=/home/anand_vihar/hadoop-2.6.0
>> export HADOOP_INSTALL=/home/anand_vihar/hadoop-2.6.0
>> export PIG_HOME=/home/anand_vihar/pig-0.14.0
>> export PIG_INSTALL=/home/anand_vihar/pig-0.14.0
>> export PIG_CLASSPATH=/home/anand_vihar/hadoop-2.6.0/etc/hadoop/
>> export HIVE_HOME=/home/anand_vihar/hive-1.1.0
>> export HIVE_INSTALL=/home/anand_vihar/hive-1.1.0
>> export
>> PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin:$HADOOP_HOME:$JAVA_HOME:$PIG_INSTALL/bin:$PIG_CLASSPATH:$HIVE_HOME:$HIVE_INSTALL/bin
>> echo $HADOOP_HOME
>> echo $JAVA_HOME
>> echo $HADOOP_INSTALL
>> echo $PIG_HOME
>> echo $PIG_INSTALL
>> echo $PIG_CLASSPATH
>> echo $HIVE_HOME
>> echo $PATH
>>
>>
>> *Error*
>>
>> anand_vihar@Latitude-E5540:~$ hive
>>
>> Logging initialized using configuration in
>> jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-common-1.1.0.jar!/hive-log4j.properties
>> SLF4J: Class path contains multiple SLF4J bindings.
>> SLF4J: Found binding in
>> [jar:file:/home/anand_vihar/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: Found binding in
>> [jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-jdbc-1.1.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> explanation.
>> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>> Exception in thread "main" java.lang.RuntimeException:
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException):
>> Cannot create directory
>> /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in
>> safe mode.
>> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2.
>> The number of live datanodes 1 has reached the minimum number 0. In safe
>> mode extension. Safe mode will be turned off automatically in 6 seconds.
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>>     at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>>     at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>     at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>     at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>>
>>     at
>> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:472)
>>     at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
>>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>     at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>     at java.lang.reflect.Method.invoke(Method.java:606)
>>     at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>>     at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
>> Caused by:
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException):
>> Cannot create directory
>> /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in
>> safe mode.
>> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2.
>> The number of live datanodes 1 has reached the minimum number 0. In safe
>> mode extension. Safe mode will be turned off automatically in 6 seconds.
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>>     at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>>     at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>>     at
>> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>     at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>     at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>>
>>     at org.apache.hadoop.ipc.Client.call(Client.java:1468)
>>     at org.apache.hadoop.ipc.Client.call(Client.java:1399)
>>     at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>>     at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
>>     at
>> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>     at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>     at java.lang.reflect.Method.invoke(Method.java:606)
>>     at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>>     at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>>     at com.sun.proxy.$Proxy14.mkdirs(Unknown Source)
>>     at
>> org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2753)
>>     at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2724)
>>     at
>> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
>>     at
>> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
>>     at
>> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>>     at
>> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
>>     at
>> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
>>     at
>> org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:584)
>>     at
>> org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:526)
>>     at
>> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:458)
>>     ... 8 more
>>
>> Can somebody advise.
>>
>> Thanks
>>
>> Anand Murali
>> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
>> Chennai - 600 004, India
>> Ph: (044)- 28474593/ 43526162 (voicemail)
>>
>>
>>
>>
>

Re: Unable to start Hive

Posted by Vikas Parashar <pa...@gmail.com>.
please send me o/p of below command

# hadoop dfsadmin -report


On Fri, May 15, 2015 at 2:43 PM, Anand Murali <an...@yahoo.com> wrote:

> Vikas
>
> Can you be more specific. What to check for in Hive logs.
>
> Thanks
>
> Regards
>
> Anand
>
> Sent from my iPhone
>
> On 15-May-2015, at 2:41 pm, Vikas Parashar <pa...@gmail.com> wrote:
>
> Hi Anand,
>
> It seems your namenode is working fine. I can't see any "safemode" related
> logs in your namenode file. Kindly check it hive logs as well.
>
> On Fri, May 15, 2015 at 12:40 PM, Anand Murali <an...@yahoo.com>
> wrote:
>
>> Vikas:
>>
>> Please find attached. At this time I would like to tell you that with the
>> current installation, I am able to run mapreduce jobs and pig scripts
>> without any installation errors. So please, any suggestions made should not
>> break and cascade other installations.
>>
>> Thanks
>>
>> Regards,
>>
>> Anand Murali
>> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
>> Chennai - 600 004, India
>> Ph: (044)- 28474593/ 43526162 (voicemail)
>>
>>
>>
>>   On Friday, May 15, 2015 12:31 PM, Kiran Dangeti <
>> kirandkumar2013@gmail.com> wrote:
>>
>>
>> Anand,
>> Sometimes it will error out due some resources are not available. So stop
>> and start the hadoop cluster and see
>> On May 15, 2015 12:24 PM, "Anand Murali" <an...@yahoo.com> wrote:
>>
>> Dear All:
>>
>> I am running Hadoop-2.6 (pseudo mode) on Ubuntu 15.04, and trying to
>> connect Hive to it after installation. I run . .hadoop as start-up script
>> which contain environment variables setting. Find below
>>
>> *. ,hadoop*
>> export HADOOP_HOME=/home/anand_vihar/hadoop-2.6.0
>> export JAVA_HOME=/home/anand_vihar/jdk1.7.0_75/
>> export HADOOP\_PREFIX=/home/anand_vihar/hadoop-2.6.0
>> export HADOOP_INSTALL=/home/anand_vihar/hadoop-2.6.0
>> export PIG_HOME=/home/anand_vihar/pig-0.14.0
>> export PIG_INSTALL=/home/anand_vihar/pig-0.14.0
>> export PIG_CLASSPATH=/home/anand_vihar/hadoop-2.6.0/etc/hadoop/
>> export HIVE_HOME=/home/anand_vihar/hive-1.1.0
>> export HIVE_INSTALL=/home/anand_vihar/hive-1.1.0
>> export
>> PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin:$HADOOP_HOME:$JAVA_HOME:$PIG_INSTALL/bin:$PIG_CLASSPATH:$HIVE_HOME:$HIVE_INSTALL/bin
>> echo $HADOOP_HOME
>> echo $JAVA_HOME
>> echo $HADOOP_INSTALL
>> echo $PIG_HOME
>> echo $PIG_INSTALL
>> echo $PIG_CLASSPATH
>> echo $HIVE_HOME
>> echo $PATH
>>
>>
>> *Error*
>>
>> anand_vihar@Latitude-E5540:~$ hive
>>
>> Logging initialized using configuration in
>> jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-common-1.1.0.jar!/hive-log4j.properties
>> SLF4J: Class path contains multiple SLF4J bindings.
>> SLF4J: Found binding in
>> [jar:file:/home/anand_vihar/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: Found binding in
>> [jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-jdbc-1.1.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> explanation.
>> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>> Exception in thread "main" java.lang.RuntimeException:
>> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException):
>> Cannot create directory
>> /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in
>> safe mode.
>> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 6 seconds.
>>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>>
>>     at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:472)
>>     at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
>>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>     at java.lang.reflect.Method.invoke(Method.java:606)
>>     at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>>     at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
>> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in safe mode.
>> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 6 seconds.
>>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>>
>>     at org.apache.hadoop.ipc.Client.call(Client.java:1468)
>>     at org.apache.hadoop.ipc.Client.call(Client.java:1399)
>>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>>     at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
>>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>     at java.lang.reflect.Method.invoke(Method.java:606)
>>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>>     at com.sun.proxy.$Proxy14.mkdirs(Unknown Source)
>>     at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2753)
>>     at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2724)
>>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
>>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
>>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>>     at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
>>     at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
>>     at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:584)
>>     at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:526)
>>     at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:458)
>>     ... 8 more
>>
>> Can somebody advise?
>>
>> Thanks
>>
>> Anand Murali
>> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
>> Chennai - 600 004, India
>> Ph: (044)- 28474593/ 43526162 (voicemail)
>>
>>
>>
>>
>
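
The SafeModeException quoted above is usually transient: the NameNode enters safe mode on every start-up and leaves it once enough block reports have arrived, and the message itself says it will exit automatically in 6 seconds. A minimal way to check, and if necessary force, the safe mode state from the shell — a sketch assuming the hadoop-2.6.0 bin directory is on PATH, as in the start-up script quoted in this thread:

# Show whether the NameNode is currently in safe mode
hdfs dfsadmin -safemode get

# Block until the NameNode has left safe mode
hdfs dfsadmin -safemode wait

# Force it out of safe mode (only if it never leaves on its own)
hdfs dfsadmin -safemode leave

Running "hdfs dfsadmin -safemode wait" before launching hive avoids the race entirely.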

Re: Unable to start Hive

Posted by Anand Murali <an...@yahoo.com>.
Vikas

Can you be more specific? What should I check for in the Hive logs?

Thanks

Regards

Anand

Sent from my iPhone

> On 15-May-2015, at 2:41 pm, Vikas Parashar <pa...@gmail.com> wrote:
> 
> Hi Anand, 
> 
> It seems your namenode is working fine. I can't see any "safemode" related entries in your namenode log file. Kindly check the Hive logs as well.
> 
>> On Fri, May 15, 2015 at 12:40 PM, Anand Murali <an...@yahoo.com> wrote:
>> Vikas:
>> 
>> Please find attached. At this time I would like to tell you that with the current installation I am able to run MapReduce jobs and Pig scripts without any errors, so please make sure that any suggested changes do not break the other installations.
>> 
>> Thanks
>> 
>> Regards,
>>  
>> Anand Murali  
>> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
>> Chennai - 600 004, India
>> Ph: (044)- 28474593/ 43526162 (voicemail)
>> 
>> 
>> 
>> On Friday, May 15, 2015 12:31 PM, Kiran Dangeti <ki...@gmail.com> wrote:
>> 
>> 
>> Anand,
>> Sometimes it errors out because some resources are not available, so stop and restart the Hadoop cluster and see.
>> On May 15, 2015 12:24 PM, "Anand Murali" <an...@yahoo.com> wrote:
>> Dear All:
>> 
>> I am running Hadoop-2.6 (pseudo mode) on Ubuntu 15.04, and am trying to connect Hive to it after installation. I run . .hadoop as a start-up script, which contains the environment variable settings below:
>> 
>> . .hadoop
>> export HADOOP_HOME=/home/anand_vihar/hadoop-2.6.0
>> export JAVA_HOME=/home/anand_vihar/jdk1.7.0_75/
>> export HADOOP_PREFIX=/home/anand_vihar/hadoop-2.6.0
>> export HADOOP_INSTALL=/home/anand_vihar/hadoop-2.6.0
>> export PIG_HOME=/home/anand_vihar/pig-0.14.0
>> export PIG_INSTALL=/home/anand_vihar/pig-0.14.0
>> export PIG_CLASSPATH=/home/anand_vihar/hadoop-2.6.0/etc/hadoop/
>> export HIVE_HOME=/home/anand_vihar/hive-1.1.0
>> export HIVE_INSTALL=/home/anand_vihar/hive-1.1.0
>> export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin:$HADOOP_HOME:$JAVA_HOME:$PIG_INSTALL/bin:$PIG_CLASSPATH:$HIVE_HOME:$HIVE_INSTALL/bin
>> echo $HADOOP_HOME
>> echo $JAVA_HOME
>> echo $HADOOP_INSTALL
>> echo $PIG_HOME
>> echo $PIG_INSTALL
>> echo $PIG_CLASSPATH
>> echo $HIVE_HOME
>> echo $PATH
>> 
>> 
>> Error
>> 
>> anand_vihar@Latitude-E5540:~$ hive
>> 
>> Logging initialized using configuration in jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-common-1.1.0.jar!/hive-log4j.properties
>> SLF4J: Class path contains multiple SLF4J bindings.
>> SLF4J: Found binding in [jar:file:/home/anand_vihar/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: Found binding in [jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-jdbc-1.1.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
>> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>> Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in safe mode.
>> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 6 seconds.
>>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>> 
>>     at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:472)
>>     at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
>>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>     at java.lang.reflect.Method.invoke(Method.java:606)
>>     at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>>     at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
>> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in safe mode.
>> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 6 seconds.
>>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>>     at java.security.AccessController.doPrivileged(Native Method)
>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>> 
>>     at org.apache.hadoop.ipc.Client.call(Client.java:1468)
>>     at org.apache.hadoop.ipc.Client.call(Client.java:1399)
>>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>>     at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
>>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>     at java.lang.reflect.Method.invoke(Method.java:606)
>>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>>     at com.sun.proxy.$Proxy14.mkdirs(Unknown Source)
>>     at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2753)
>>     at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2724)
>>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
>>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
>>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>>     at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
>>     at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
>>     at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:584)
>>     at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:526)
>>     at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:458)
>>     ... 8 more
>> 
>> Can somebody advise?
>> 
>> Thanks
>>  
>> Anand Murali  
>> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
>> Chennai - 600 004, India
>> Ph: (044)- 28474593/ 43526162 (voicemail)
> 
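
On the question of what to look for in the Hive logs: with the default hive-log4j.properties that ships with hive-1.1.0 (the same file named in the "Logging initialized" line above), the Hive CLI writes hive.log under the system temp directory, one subdirectory per user. A quick way to inspect the most recent failure, assuming that default configuration is unchanged:

# Default CLI log location under the stock hive-log4j.properties
tail -n 100 /tmp/$USER/hive.log

# Or jump straight to the session-start failure
grep -n SafeModeException /tmp/$USER/hive.log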

Re: Unable to start Hive

Posted by Vikas Parashar <pa...@gmail.com>.
Hi Anand,

It seems your namenode is working fine. I can't see any "safemode" related entries in your namenode log file. Kindly check the Hive logs as well.

On Fri, May 15, 2015 at 12:40 PM, Anand Murali <an...@yahoo.com>
wrote:

> Vikas:
>
> Please find attached. At this time I would like to tell you that with the current installation I am able to run MapReduce jobs and Pig scripts without any errors, so please make sure that any suggested changes do not break the other installations.
>
> Thanks
>
> Regards,
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>
>
>
>   On Friday, May 15, 2015 12:31 PM, Kiran Dangeti <kirandkumar2013@gmail.com> wrote:
>
>
> Anand,
> Sometimes it errors out because some resources are not available, so stop and restart the Hadoop cluster and see.
> On May 15, 2015 12:24 PM, "Anand Murali" <an...@yahoo.com> wrote:
>
> Dear All:
>
> I am running Hadoop-2.6 (pseudo mode) on Ubuntu 15.04, and am trying to connect Hive to it after installation. I run . .hadoop as a start-up script, which contains the environment variable settings below:
>
> *. .hadoop*
> export HADOOP_HOME=/home/anand_vihar/hadoop-2.6.0
> export JAVA_HOME=/home/anand_vihar/jdk1.7.0_75/
> export HADOOP_PREFIX=/home/anand_vihar/hadoop-2.6.0
> export HADOOP_INSTALL=/home/anand_vihar/hadoop-2.6.0
> export PIG_HOME=/home/anand_vihar/pig-0.14.0
> export PIG_INSTALL=/home/anand_vihar/pig-0.14.0
> export PIG_CLASSPATH=/home/anand_vihar/hadoop-2.6.0/etc/hadoop/
> export HIVE_HOME=/home/anand_vihar/hive-1.1.0
> export HIVE_INSTALL=/home/anand_vihar/hive-1.1.0
> export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin:$HADOOP_HOME:$JAVA_HOME:$PIG_INSTALL/bin:$PIG_CLASSPATH:$HIVE_HOME:$HIVE_INSTALL/bin
> echo $HADOOP_HOME
> echo $JAVA_HOME
> echo $HADOOP_INSTALL
> echo $PIG_HOME
> echo $PIG_INSTALL
> echo $PIG_CLASSPATH
> echo $HIVE_HOME
> echo $PATH
>
>
> *Error*
>
> anand_vihar@Latitude-E5540:~$ hive
>
> Logging initialized using configuration in jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-common-1.1.0.jar!/hive-log4j.properties
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/home/anand_vihar/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-jdbc-1.1.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in safe mode.
> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 6 seconds.
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>
>     at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:472)
>     at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in safe mode.
> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 6 seconds.
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>
>     at org.apache.hadoop.ipc.Client.call(Client.java:1468)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1399)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>     at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>     at com.sun.proxy.$Proxy14.mkdirs(Unknown Source)
>     at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2753)
>     at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2724)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
>     at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:584)
>     at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:526)
>     at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:458)
>     ... 8 more
>
> Can somebody advise?
>
> Thanks
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>
>
>
>
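
One caveat on the stop-and-start advice quoted above: restarting HDFS puts the NameNode back into safe mode until the DataNodes have reported their blocks, so launching hive immediately after a restart reproduces the same SafeModeException. A restart sequence for this pseudo-distributed set-up that waits for safe mode to clear first — a sketch assuming the Hadoop sbin directory is on PATH, as in the start-up script:

# Restart HDFS and YARN (pseudo-distributed Hadoop 2.6)
stop-yarn.sh
stop-dfs.sh
start-dfs.sh
start-yarn.sh

# Only start Hive once the NameNode is writable again
hdfs dfsadmin -safemode wait
hive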

Re: Unable to start Hive

Posted by Vikas Parashar <pa...@gmail.com>.
Hi Anand,

It seems your namenode is working fine. I can't see any "safemode" related
logs in your namenode file. Kindly check it hive logs as well.

On Fri, May 15, 2015 at 12:40 PM, Anand Murali <an...@yahoo.com>
wrote:

> Vikas:
>
> Please find attached. At this time I would like to tell you that with the
> current installation, I am able to run mapreduce jobs and pig scripts
> without any installation errors. So please, any suggestions made should not
> break and cascade other installations.
>
> Thanks
>
> Regards,
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>
>
>
>   On Friday, May 15, 2015 12:31 PM, Kiran Dangeti <
> kirandkumar2013@gmail.com> wrote:
>
>
> Anand,
> Sometimes it will error out due some resources are not available. So stop
> and start the hadoop cluster and see
> On May 15, 2015 12:24 PM, "Anand Murali" <an...@yahoo.com> wrote:
>
> Dear All:
>
> I am running Hadoop-2.6 (pseudo mode) on Ubuntu 15.04, and trying to
> connect Hive to it after installation. I run . .hadoop as start-up script
> which contain environment variables setting. Find below
>
> *. ,hadoop*
> export HADOOP_HOME=/home/anand_vihar/hadoop-2.6.0
> export JAVA_HOME=/home/anand_vihar/jdk1.7.0_75/
> export HADOOP\_PREFIX=/home/anand_vihar/hadoop-2.6.0
> export HADOOP_INSTALL=/home/anand_vihar/hadoop-2.6.0
> export PIG_HOME=/home/anand_vihar/pig-0.14.0
> export PIG_INSTALL=/home/anand_vihar/pig-0.14.0
> export PIG_CLASSPATH=/home/anand_vihar/hadoop-2.6.0/etc/hadoop/
> export HIVE_HOME=/home/anand_vihar/hive-1.1.0
> export HIVE_INSTALL=/home/anand_vihar/hive-1.1.0
> export
> PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin:$HADOOP_HOME:$JAVA_HOME:$PIG_INSTALL/bin:$PIG_CLASSPATH:$HIVE_HOME:$HIVE_INSTALL/bin
> echo $HADOOP_HOME
> echo $JAVA_HOME
> echo $HADOOP_INSTALL
> echo $PIG_HOME
> echo $PIG_INSTALL
> echo $PIG_CLASSPATH
> echo $HIVE_HOME
> echo $PATH
>
>
> *Error*
>
> anand_vihar@Latitude-E5540:~$ hive
>
> Logging initialized using configuration in
> jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-common-1.1.0.jar!/hive-log4j.properties
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/home/anand_vihar/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-jdbc-1.1.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Exception in thread "main" java.lang.RuntimeException:
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException):
> Cannot create directory
> /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in
> safe mode.
> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2.
> The number of live datanodes 1 has reached the minimum number 0. In safe
> mode extension. Safe mode will be turned off automatically in 6 seconds.
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>     at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>     at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>
>     at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:472)
>     at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in safe mode.
> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 6 seconds.
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>
>     at org.apache.hadoop.ipc.Client.call(Client.java:1468)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1399)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>     at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>     at com.sun.proxy.$Proxy14.mkdirs(Unknown Source)
>     at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2753)
>     at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2724)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
>     at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:584)
>     at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:526)
>     at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:458)
>     ... 8 more
>
> Can somebody advise?
>
> Thanks
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>
>
>
>
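
For reference, the failure quoted above is ordinary HDFS safe mode: on startup the NameNode stays read-only until enough block reports arrive, and it exits on its own (the message here even says "in 6 seconds"). If it ever stays stuck, safe mode can be inspected and left manually; a minimal sketch, assuming a standard Hadoop 2.6 install with the hdfs binary on the PATH:

    hdfs dfsadmin -safemode get     # report whether the NameNode is in safe mode
    hdfs dfsadmin -safemode wait    # block until the NameNode leaves safe mode
    hdfs dfsadmin -safemode leave   # force it out (only if it never leaves by itself)

Retrying hive once "Safe mode is OFF" is reported avoids the SessionState failure.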

Re: Unable to start Hive

Posted by Vikas Parashar <pa...@gmail.com>.
Hi Anand,

It seems your namenode is working fine. I can't see any "safemode"-related
entries in your namenode log file. Kindly check the Hive logs as well.
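
A quick way to do that, assuming the stock hive-log4j.properties shipped with Hive 1.1.0 (which writes the CLI log under the system temp directory, per user):

    tail -n 100 /tmp/$USER/hive.log    # default Hive CLI log location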

On Fri, May 15, 2015 at 12:40 PM, Anand Murali <an...@yahoo.com>
wrote:

> Vikas:
>
> Please find attached. At this time I would like to tell you that with the
> current installation, I am able to run mapreduce jobs and pig scripts
> without any installation errors. So please make sure any suggestions do not
> break or cascade into the other installations.
>
> Thanks
>
> Regards,
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>
>
>
>   On Friday, May 15, 2015 12:31 PM, Kiran Dangeti <kirandkumar2013@gmail.com> wrote:
>
>
> Anand,
> Sometimes it will error out because some resources are not available. So
> stop and restart the Hadoop cluster and see.
> On May 15, 2015 12:24 PM, "Anand Murali" <an...@yahoo.com> wrote:
>
> Dear All:
>
> I am running Hadoop-2.6 (pseudo mode) on Ubuntu 15.04, and trying to
> connect Hive to it after installation. I run . .hadoop as a start-up script
> which contains the environment variable settings below.
>
> . .hadoop
> export HADOOP_HOME=/home/anand_vihar/hadoop-2.6.0
> export JAVA_HOME=/home/anand_vihar/jdk1.7.0_75/
> export HADOOP_PREFIX=/home/anand_vihar/hadoop-2.6.0
> export HADOOP_INSTALL=/home/anand_vihar/hadoop-2.6.0
> export PIG_HOME=/home/anand_vihar/pig-0.14.0
> export PIG_INSTALL=/home/anand_vihar/pig-0.14.0
> export PIG_CLASSPATH=/home/anand_vihar/hadoop-2.6.0/etc/hadoop/
> export HIVE_HOME=/home/anand_vihar/hive-1.1.0
> export HIVE_INSTALL=/home/anand_vihar/hive-1.1.0
> export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin:$HADOOP_HOME:$JAVA_HOME:$PIG_INSTALL/bin:$PIG_CLASSPATH:$HIVE_HOME:$HIVE_INSTALL/bin
> echo $HADOOP_HOME
> echo $JAVA_HOME
> echo $HADOOP_INSTALL
> echo $PIG_HOME
> echo $PIG_INSTALL
> echo $PIG_CLASSPATH
> echo $HIVE_HOME
> echo $PATH
>
>
> Error
>
> anand_vihar@Latitude-E5540:~$ hive
>
> Logging initialized using configuration in jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-common-1.1.0.jar!/hive-log4j.properties
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/home/anand_vihar/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-jdbc-1.1.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in safe mode.
> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 6 seconds.
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>
>     at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:472)
>     at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in safe mode.
> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 6 seconds.
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>
>     at org.apache.hadoop.ipc.Client.call(Client.java:1468)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1399)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>     at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>     at com.sun.proxy.$Proxy14.mkdirs(Unknown Source)
>     at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2753)
>     at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2724)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
>     at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:584)
>     at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:526)
>     at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:458)
>     ... 8 more
>
> Can somebody advise?
>
> Thanks
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>
>
>
>

Re: Unable to start Hive

Posted by Anand Murali <an...@yahoo.com>.
Vikas:
Please find attached. At this time I would like to tell you that with the current installation, I am able to run mapreduce jobs and pig scripts without any installation errors. So please make sure any suggestions do not break or cascade into the other installations.
Thanks
Regards,
Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)


     On Friday, May 15, 2015 12:31 PM, Kiran Dangeti <ki...@gmail.com> wrote:
   

Anand,
Sometimes it will error out because some resources are not available. So stop and restart the Hadoop cluster and see.

On May 15, 2015 12:24 PM, "Anand Murali" <an...@yahoo.com> wrote:

Dear All:
I am running Hadoop-2.6 (pseudo mode) on Ubuntu 15.04, and trying to connect Hive to it after installation. I run . .hadoop as a start-up script which contains the environment variable settings below.

. .hadoop
export HADOOP_HOME=/home/anand_vihar/hadoop-2.6.0
export JAVA_HOME=/home/anand_vihar/jdk1.7.0_75/
export HADOOP_PREFIX=/home/anand_vihar/hadoop-2.6.0
export HADOOP_INSTALL=/home/anand_vihar/hadoop-2.6.0
export PIG_HOME=/home/anand_vihar/pig-0.14.0
export PIG_INSTALL=/home/anand_vihar/pig-0.14.0
export PIG_CLASSPATH=/home/anand_vihar/hadoop-2.6.0/etc/hadoop/
export HIVE_HOME=/home/anand_vihar/hive-1.1.0
export HIVE_INSTALL=/home/anand_vihar/hive-1.1.0
export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin:$HADOOP_HOME:$JAVA_HOME:$PIG_INSTALL/bin:$PIG_CLASSPATH:$HIVE_HOME:$HIVE_INSTALL/bin
echo $HADOOP_HOME
echo $JAVA_HOME
echo $HADOOP_INSTALL
echo $PIG_HOME
echo $PIG_INSTALL
echo $PIG_CLASSPATH
echo $HIVE_HOME
echo $PATH
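
To verify the script really was sourced into the current shell, something like the following would do, assuming the paths above exist:

    . .hadoop          # source the start-up script into this shell
    hadoop version     # should report Hadoop 2.6.0
    hive --version     # should report Hive 1.1.0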

Error
anand_vihar@Latitude-E5540:~$ hive

Logging initialized using configuration in jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-common-1.1.0.jar!/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/anand_vihar/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-jdbc-1.1.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in safe mode.
The reported blocks 2 has reached the threshold 0.9990 of total blocks 2. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 6 seconds.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:472)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in safe mode.
The reported blocks 2 has reached the threshold 0.9990 of total blocks 2. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 6 seconds.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

    at org.apache.hadoop.ipc.Client.call(Client.java:1468)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy14.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2753)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2724)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
    at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:584)
    at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:526)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:458)
    ... 8 more

Can somebody advise?
Thanks

Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)


  

Re: Unable to start Hive

Posted by Kiran Dangeti <ki...@gmail.com>.
Anand,

Sometimes it will error out because some resources are not available. So stop
and restart the Hadoop cluster and see.
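
A sketch of that restart for this pseudo-distributed setup, assuming the stock Hadoop 2.6 sbin scripts are on the PATH (the quoted start-up script below adds them):

    stop-yarn.sh && stop-dfs.sh      # stop YARN and HDFS
    start-dfs.sh && start-yarn.sh    # bring them back up
    hdfs dfsadmin -safemode wait     # wait for the NameNode to leave safe mode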
On May 15, 2015 12:24 PM, "Anand Murali" <an...@yahoo.com> wrote:

> Dear All:
>
> I am running Hadoop-2.6 (pseudo mode) on Ubuntu 15.04, and trying to
> connect Hive to it after installation. I run . .hadoop as a start-up script
> which contains the environment variable settings below.
>
> . .hadoop
> export HADOOP_HOME=/home/anand_vihar/hadoop-2.6.0
> export JAVA_HOME=/home/anand_vihar/jdk1.7.0_75/
> export HADOOP_PREFIX=/home/anand_vihar/hadoop-2.6.0
> export HADOOP_INSTALL=/home/anand_vihar/hadoop-2.6.0
> export PIG_HOME=/home/anand_vihar/pig-0.14.0
> export PIG_INSTALL=/home/anand_vihar/pig-0.14.0
> export PIG_CLASSPATH=/home/anand_vihar/hadoop-2.6.0/etc/hadoop/
> export HIVE_HOME=/home/anand_vihar/hive-1.1.0
> export HIVE_INSTALL=/home/anand_vihar/hive-1.1.0
> export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin:$HADOOP_HOME:$JAVA_HOME:$PIG_INSTALL/bin:$PIG_CLASSPATH:$HIVE_HOME:$HIVE_INSTALL/bin
> echo $HADOOP_HOME
> echo $JAVA_HOME
> echo $HADOOP_INSTALL
> echo $PIG_HOME
> echo $PIG_INSTALL
> echo $PIG_CLASSPATH
> echo $HIVE_HOME
> echo $PATH
>
>
> Error
>
> anand_vihar@Latitude-E5540:~$ hive
>
> Logging initialized using configuration in jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-common-1.1.0.jar!/hive-log4j.properties
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/home/anand_vihar/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/home/anand_vihar/hive-1.1.0/lib/hive-jdbc-1.1.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in safe mode.
> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 6 seconds.
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>
>     at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:472)
>     at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/anand_vihar/a9eb2cf7-9890-4ec3-af6c-ae0c40d9e9d7. Name node is in safe mode.
> The reported blocks 2 has reached the threshold 0.9990 of total blocks 2. The number of live datanodes 1 has reached the minimum number 0. In safe mode extension. Safe mode will be turned off automatically in 6 seconds.
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1364)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4216)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:600)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>
>     at org.apache.hadoop.ipc.Client.call(Client.java:1468)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1399)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>     at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:539)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:606)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>     at com.sun.proxy.$Proxy14.mkdirs(Unknown Source)
>     at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2753)
>     at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2724)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:870)
>     at
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:866)
>     at
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>     at
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:866)
>     at
> org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:859)
>     at
> org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:584)
>     at
> org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:526)
>     at
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:458)
>     ... 8 more
>
> Can somebody advise.
>
> Thanks
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>

Re: Unable to start Hive

Posted by Kiran Dangeti <ki...@gmail.com>.
Anand,

Sometimes it errors out because some resources are not yet available. Stop
and restart the Hadoop cluster and see if that helps.
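For reference, here is a minimal sketch of handling safe mode by hand,
assuming the hadoop-2.6.0 bin and sbin directories from your script are on
PATH:

    # Restart just HDFS (scripts live in $HADOOP_INSTALL/sbin)
    stop-dfs.sh && start-dfs.sh

    # Or inspect safe mode directly instead of restarting:
    hdfs dfsadmin -safemode get     # report whether safe mode is on
    hdfs dfsadmin -safemode wait    # block until the NameNode leaves safe mode
    hdfs dfsadmin -safemode leave   # force it off (only if HDFS is healthy)
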
On May 15, 2015 12:24 PM, "Anand Murali" <an...@yahoo.com> wrote:

> Dear All:
>
> I am running Hadoop-2.6 (pseudo mode) on Ubuntu 15.04, and trying to
> connect Hive to it after installation.
> [environment listing and SafeModeException stack trace snipped; quoted in
> full in the original message above]
>
> Can somebody advise.
>
> Thanks
>
> Anand Murali

Re: Unable to start Hive

Posted by Vikas Parashar <pa...@gmail.com>.
Hi Anand,

Your namenode is in safe mode. Could you please share the namenode logs?
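
In case it helps: with a tarball install the NameNode log normally lives
under $HADOOP_HOME/logs, named hadoop-<user>-namenode-<hostname>.log. A
sketch, assuming the paths and hostname from your startup script and shell
prompt:

    # List the log directory
    ls /home/anand_vihar/hadoop-2.6.0/logs/

    # Tail the NameNode log while reproducing the error
    tail -f /home/anand_vihar/hadoop-2.6.0/logs/hadoop-anand_vihar-namenode-Latitude-E5540.log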

On Fri, May 15, 2015 at 12:22 PM, Anand Murali <an...@yahoo.com>
wrote:

> Dear All:
>
> I am running Hadoop-2.6 (pseudo mode) on Ubuntu 15.04, and trying to
> connect Hive to it after installation.
> [environment listing and SafeModeException stack trace snipped; quoted in
> full in the original message above]
>
> Can somebody advise.
>
> Thanks
>
> Anand Murali
