Posted to common-user@hadoop.apache.org by Mahmood Naderan <nt...@yahoo.com> on 2015/04/30 07:53:13 UTC

ipc.client RetryUpToMaximumCountWithFixedSleep

Hi, when I run the following command, I get an ipc.Client timeout error:
[mahmood@tiger Index]$ java -jar indexdata.jar `pwd`/result hdfs://127.0.0.1:9000/data-Index
15/04/30 10:21:03 INFO ipc.Client: Retrying connect to server: localhost.localdomain/127.0.0.1:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

[mahmood@tiger Index]$ ls
data-Index  genData_Index.sh  indexdata.jar  result  run-Index.sh
[mahmood@tiger Index]$ ls data-Index/
lda_wiki1w_1  lda_wiki1w_2




Hadoop is up according to the report
[mahmood@tiger Index]$ hadoop dfsadmin -report
Warning: $HADOOP_HOME is deprecated.
Configured Capacity: 2953524240384 (2.69 TB)
Present Capacity: 602299084800 (560.93 GB)
DFS Remaining: 601237344256 (559.95 GB)
DFS Used: 1061740544 (1012.55 MB)
DFS Used%: 0.18%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)
Name: 127.0.0.1:50010
Decommission Status : Normal
Configured Capacity: 2953524240384 (2.69 TB)
DFS Used: 1061740544 (1012.55 MB)
Non DFS Used: 2351225155584 (2.14 TB)
DFS Remaining: 601237344256(559.95 GB)
DFS Used%: 0.04%
DFS Remaining%: 20.36%
Last contact: Thu Apr 30 10:22:03 IRDT 2015


Do you have any idea how to fix that?
Regards,
Mahmood

Re: ipc.client RetryUpToMaximumCountWithFixedSleep

Posted by Mahmood N <nt...@yahoo.com>.
Finally I figured out the problem! Things are:
1- The ADDRESS and PORT specified in the command

java -jar indexdata.jar result hdfs://ADDRESS:PORT/data-Index

must be the same as in fs.default.name:

<property>
  <name>fs.default.name</name>
  <value>hdfs://ADDRESS:PORT</value>
</property>

So, having <value>hdfs://hostname:9000</value> while running against hdfs://localhost:54310 fails to work (ipc.client timeout). Even so, the hadoop report command says the namenode is up, because it reads fs.default.name.
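The mismatch described above can be sketched as a quick check: parse fs.default.name out of core-site.xml and compare its scheme, host, and port against the URI passed on the command line. This is an illustrative Python sketch (property name and values taken from this thread, not from the IndexHDFS code); note it compares hostnames literally, so localhost vs. 127.0.0.1 would still count as a mismatch.

```python
# Sketch: check that the hdfs:// URI given on the command line matches
# fs.default.name in core-site.xml. fs.default.name is the Hadoop 1.x
# property; later versions renamed it fs.defaultFS.
import xml.etree.ElementTree as ET
from urllib.parse import urlparse

def default_fs_from_core_site(xml_text):
    """Extract the fs.default.name value from core-site.xml content."""
    root = ET.fromstring(xml_text)
    for prop in root.iter("property"):
        if prop.findtext("name") == "fs.default.name":
            return prop.findtext("value")
    return None

def uris_match(client_uri, default_fs):
    """True when scheme, hostname and port agree (the check that failed here).
    Literal string comparison: does not resolve localhost vs 127.0.0.1."""
    a, b = urlparse(client_uri), urlparse(default_fs)
    return (a.scheme, a.hostname, a.port) == (b.scheme, b.hostname, b.port)

core_site = """<configuration>
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
</property>
</configuration>"""

fs = default_fs_from_core_site(core_site)
print(uris_match("hdfs://localhost:54310/data-Index", fs))  # True
print(uris_match("hdfs://localhost:9000/data-Index", fs))   # False
```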


2- After matching ADDRESS and PORT, I got this error:
Exception in thread "main" java.lang.NullPointerException
        at IndexHDFS.indexData(IndexHDFS.java:92)
        at IndexHDFS.main(IndexHDFS.java:72)

The problem here is that the destination folder hdfs://hostname:9000/data-Index doesn't exist. The reason is that when I used copyFromLocal, I embedded data-Index/ under several parent folders (my script did that, though!). So the correct form is hdfs://hostname:9000/path/to/data-Index
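Since the IndexHDFS source is not available, the following is a hypothetical Python sketch of the fail-fast check that would have turned this NullPointerException into a clear error: verify the destination path exists before dereferencing it. The `check_destination` name and the `exists` callable (standing in for an HDFS lookup) are invented for illustration.

```python
# Hypothetical sketch: fail fast with a descriptive message when the
# destination path is missing, instead of letting a later lookup return
# null/None and crash with an NPE.
from urllib.parse import urlparse

def check_destination(uri, exists):
    """`exists` is a callable path -> bool standing in for an HDFS lookup."""
    path = urlparse(uri).path
    if not path or not exists(path):
        raise FileNotFoundError(
            f"destination {path!r} does not exist on the cluster; "
            "did copyFromLocal put it under a different parent directory?")
    return path

# Simulated HDFS namespace: data-Index sits under /path/to, not at the root.
fake_hdfs = {"/path/to/data-Index"}
print(check_destination("hdfs://localhost:54310/path/to/data-Index",
                        fake_hdfs.__contains__))  # /path/to/data-Index
```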

Now it is fine. Thanks to Sandeep for motivating me to focus on the address and port.
Regards,
Mahmood

Re: ipc.client RetryUpToMaximumCountWithFixedSleep

Posted by Mahmood Naderan <nt...@yahoo.com>.
Any idea is greatly appreciated. Even identifying whether the problem is on the Hadoop side or the third-party side would be helpful. Regards,
Mahmood 


     On Friday, May 1, 2015 11:09 AM, Mahmood Naderan <nt...@yahoo.com> wrote:
   

Hi Rajesh, that was a good point. In my config, I used:
<configuration>
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
</property>
</configuration>

So I ran:
hadoop -jar indexdata.jar `pwd`/result hdfs://127.0.0.1:54310/data-Index

This time, I get another error:
Exception in thread "main" java.lang.NullPointerException
        at IndexHDFS.indexData(IndexHDFS.java:92)
        at IndexHDFS.main(IndexHDFS.java:72)

Is that a problem with IndexData.jar? Can you provide more detailed information so that I can tell the developers where to find the bug?
 Regards,
Mahmood

Re: ipc.client RetryUpToMaximumCountWithFixedSleep

Posted by Mahmood Naderan <nt...@yahoo.com>.
Hi Rajesh, that was a good point. In my config, I used:
<configuration>
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
</property>
</configuration>

So I ran:
hadoop -jar indexdata.jar `pwd`/result hdfs://127.0.0.1:54310/data-Index

This time, I get another error:
Exception in thread "main" java.lang.NullPointerException
        at IndexHDFS.indexData(IndexHDFS.java:92)
        at IndexHDFS.main(IndexHDFS.java:72)

Is that a problem with IndexData.jar? Can you provide more detailed information so that I can tell the developers where to find the bug?
 Regards,
Mahmood

Re: ipc.client RetryUpToMaximumCountWithFixedSleep

Posted by Rajesh Kartha <ka...@gmail.com>.
Curious, did you check fs.defaultFS in core-site.xml? Just to make
sure the HDFS port is 9000 and not 8020.

-Rajesh
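A generic way to check which port the NameNode is actually listening on is a plain TCP probe (not Hadoop-specific; the candidate ports below are the ones mentioned in this thread):

```python
# Probe a host:port with a short-timeout TCP connect. A True result only
# means something accepted the connection, not that it is a NameNode.
import socket

def is_port_open(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in (8020, 9000, 54310):
    print(port, is_port_open("127.0.0.1", port))
```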

On Thu, Apr 30, 2015 at 4:42 AM, Mahmood Naderan <nt...@yahoo.com>
wrote:

> I found out that the $JAVA_HOME specified in hadoop-env.sh was different
> from "java -version" on the command line, so I fixed the variable to point to
> Java 1.7 (the jar file was also built with 1.7).
>
> Still I get the ipc.client error, but this time it sounds different. The whole
> output (in verbose mode) is available at http://pastebin.com/A7SzcqBD
>
> You can see in the bottom that hadoop is up and works properly. It is
> really an annoying message. Do you have any idea about that? I faced that
> problem before but at that time, the report command showed that datanode
> was off.
>
> This time I see the datanode is up so I really wonder how to overcome this
> annoying error!
>
> Regards,
> Mahmood
>

Re: ipc.client RetryUpToMaximumCountWithFixedSleep

Posted by Mahmood Naderan <nt...@yahoo.com>.
I found out that the $JAVA_HOME specified in hadoop-env.sh was different from "java -version" on the command line, so I fixed the variable to point to Java 1.7 (the jar file was also built with 1.7).
Still I get the ipc.client error, but this time it sounds different. The whole output (in verbose mode) is available at http://pastebin.com/A7SzcqBD

You can see in the bottom that hadoop is up and works properly. It is really an annoying message. Do you have any idea about that? I faced that problem before but at that time, the report command showed that datanode was off. 

This time I see the datanode is up so I really wonder how to overcome this annoying error!
 Regards,
Mahmood

Re: ipc.client RetryUpToMaximumCountWithFixedSleep

Posted by Mahmood Naderan <nt...@yahoo.com>.
I don't think that is the main issue, because I found that the versions are the same.
On my machine, which runs Hadoop 1.2.0:
[mahmood@tiger Index]$ java -version
java version "1.7.0_71"
OpenJDK Runtime Environment (rhel-2.5.3.1.el6-x86_64 u71-b14)
OpenJDK 64-Bit Server VM (build 24.65-b04, mixed mode)


According to a discussion on SO, http://goo.gl/9cg4xg, I extracted the jar file and found that the class file version is 51. That means the Java which compiled the class file was 1.7 (i.e., the developers' Java version).
[mahmood@tiger Index]$ jar xf indexdata.jar
[mahmood@tiger Index]$ file IndexData.class
IndexData.class: compiled Java class data, version 51.0
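The `file` output above can be reproduced by reading the class-file header directly: bytes 0-3 are the 0xCAFEBABE magic, bytes 4-5 the minor version, and bytes 6-7 the major version, all big-endian. Major 51 corresponds to Java 7. A small Python sketch:

```python
# Decode a Java class-file header: 4-byte magic, then minor and major
# version as big-endian unsigned shorts.
import struct

def class_file_major(data):
    magic, minor, major = struct.unpack(">IHH", data[:8])
    if magic != 0xCAFEBABE:
        raise ValueError("not a Java class file")
    return major

# First 8 bytes of a class compiled for Java 7: major 51 (0x33), minor 0.
header = bytes.fromhex("cafebabe00000033")
print(class_file_major(header))  # 51
```

In practice `class_file_major(open("IndexData.class", "rb").read(8))` would give the same answer as the `file` command.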
 Regards,
Mahmood 

Re: ipc.client RetryUpToMaximumCountWithFixedSleep

Posted by Ashutosh Kumar <as...@gmail.com>.
Looks like your Java version is lower than the one used to create the jar file.
Can you recompile and create the jar in your environment, or upgrade your Java
version?
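For reference, the class-file major version reported in "Unsupported major.minor version 51.0" maps to a Java release; a sketch covering the releases relevant at the time:

```python
# Class-file major version -> Java release. The JVM refuses to load a
# class whose major version is newer than it supports, which is exactly
# the UnsupportedClassVersionError below: the jar needs Java 7 (51),
# while the `hadoop` launcher was running on an older JVM.
MAJOR_TO_JAVA = {45: "1.1", 46: "1.2", 47: "1.3", 48: "1.4",
                 49: "5", 50: "6", 51: "7", 52: "8"}

print(MAJOR_TO_JAVA[51])  # 7
```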

On Thu, Apr 30, 2015 at 1:20 PM, Mahmood Naderan <nt...@yahoo.com>
wrote:

> There was a syntax error in the previous post. The correct command is:
>
> [mahmood@tiger Index]$ hadoop jar indexdata.jar `pwd`/result hdfs://
> 127.0.0.1:9000/data-Index
> Warning: $HADOOP_HOME is deprecated.
>
> Exception in thread "main" java.lang.UnsupportedClassVersionError:
> IndexHDFS : Unsupported major.minor version 51.0
>         at java.lang.ClassLoader.defineClass1(Native Method)
>         at java.lang.ClassLoader.defineClass(ClassLoader.java:643)
>         at
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>         at java.net.URLClassLoader.defineClass(URLClassLoader.java:277)
>         at java.net.URLClassLoader.access$000(URLClassLoader.java:73)
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:212)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:268)
>         at java.lang.Class.forName0(Native Method)
>         at java.lang.Class.forName(Class.java:274)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:149)
>
>
> Regards,
> Mahmood
>
>
>
>   On Thursday, April 30, 2015 12:17 PM, Mahmood Naderan <
> nt_mahmood@yahoo.com> wrote:
>
>
> Can you explain more?
> To be honest, I am running a third-party script (not mine) and the
> developers have no idea about the error.
>
> Do you mean that running "hadoop -jar indexdata.jar `pwd`/result hdfs://
> 127.0.0.1:9000/data-Index" is a better one? For that, I get this error:
>
> [mahmood@tiger Index]$ hadoop jar indexdata.jar ${WORK_DIR}/result hdfs://
> 127.0.0.1:9000/data-Index Warning: $HADOOP_HOME is deprecated.
>
> Exception in thread "main" java.lang.UnsupportedClassVersionError:
> IndexHDFS : Unsupported major.minor version 51.0
>         at java.lang.ClassLoader.defineClass1(Native Method)
>         at java.lang.ClassLoader.defineClass(ClassLoader.java:643)
>         at
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>         at java.net.URLClassLoader.defineClass(URLClassLoader.java:277)
>         at java.net.URLClassLoader.access$000(URLClassLoader.java:73)
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:212)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:268)
>         at java.lang.Class.forName0(Native Method)
>         at java.lang.Class.forName(Class.java:274)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:149)
>
>
>
>
> Regards,
> Mahmood

Re: ipc.client RetryUpToMaximumCountWithFixedSleep

Posted by Ashutosh Kumar <as...@gmail.com>.
Looks like your java version is lower than used for creation of jar file.
Can you recompile and create jar in your env ? or upgrade your java
version?

On Thu, Apr 30, 2015 at 1:20 PM, Mahmood Naderan <nt...@yahoo.com>
wrote:

> There was a syntax error in the previous post. The correct is:
>
> [mahmood@tiger Index]$ hadoop jar indexdata.jar `pwd`/result hdfs://
> 127.0.0.1:9000/data-Index
> Warning: $HADOOP_HOME is deprecated.
>
> Exception in thread "main" java.lang.UnsupportedClassVersionError:
> IndexHDFS : Unsupported major.minor version 51.0
>         at java.lang.ClassLoader.defineClass1(Native Method)
>         at java.lang.ClassLoader.defineClass(ClassLoader.java:643)
>         at
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>         at java.net.URLClassLoader.defineClass(URLClassLoader.java:277)
>         at java.net.URLClassLoader.access$000(URLClassLoader.java:73)
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:212)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:268)
>         at java.lang.Class.forName0(Native Method)
>         at java.lang.Class.forName(Class.java:274)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:149)
>
>
> Regards,
> Mahmood
>
>
>
>   On Thursday, April 30, 2015 12:17 PM, Mahmood Naderan <
> nt_mahmood@yahoo.com> wrote:
>
>
> Can you explain more?
> To be honest, I am running a third party script (not mine) and the
> developers have no idea on the error.
>
> Do you mean that running "hadoop -jar indexdata.jar `pwd`/result hdfs://
> 127.0.0.1:9000/data-Index" is a better one? For that, I get this error:
>
> [mahmood@tiger Index]$ hadoop jar indexdata.jar ${WORK_DIR}/result hdfs://
> 127.0.0.1:9000/data-Index Warning: $HADOOP_HOME is deprecated.
>
> Exception in thread "main" java.lang.UnsupportedClassVersionError:
> IndexHDFS : Unsupported major.minor version 51.0
>         at java.lang.ClassLoader.defineClass1(Native Method)
>         at java.lang.ClassLoader.defineClass(ClassLoader.java:643)
>         at
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>         at java.net.URLClassLoader.defineClass(URLClassLoader.java:277)
>         at java.net.URLClassLoader.access$000(URLClassLoader.java:73)
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:212)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:268)
>         at java.lang.Class.forName0(Native Method)
>         at java.lang.Class.forName(Class.java:274)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:149)
>
>
>
>
> Regards,
> Mahmood
>
>
>
>
>
>
>

Re: ipc.client RetryUpToMaximumCountWithFixedSleep

Posted by Ashutosh Kumar <as...@gmail.com>.
Looks like your java version is lower than used for creation of jar file.
Can you recompile and create jar in your env ? or upgrade your java
version?

On Thu, Apr 30, 2015 at 1:20 PM, Mahmood Naderan <nt...@yahoo.com>
wrote:

> There was a syntax error in the previous post. The correct is:
>
> [mahmood@tiger Index]$ hadoop jar indexdata.jar `pwd`/result hdfs://
> 127.0.0.1:9000/data-Index
> Warning: $HADOOP_HOME is deprecated.
>
> Exception in thread "main" java.lang.UnsupportedClassVersionError:
> IndexHDFS : Unsupported major.minor version 51.0
>         at java.lang.ClassLoader.defineClass1(Native Method)
>         at java.lang.ClassLoader.defineClass(ClassLoader.java:643)
>         at
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>         at java.net.URLClassLoader.defineClass(URLClassLoader.java:277)
>         at java.net.URLClassLoader.access$000(URLClassLoader.java:73)
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:212)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:268)
>         at java.lang.Class.forName0(Native Method)
>         at java.lang.Class.forName(Class.java:274)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:149)
>
>
> Regards,
> Mahmood
>
>
>
>   On Thursday, April 30, 2015 12:17 PM, Mahmood Naderan <
> nt_mahmood@yahoo.com> wrote:
>
>
> Can you explain more?
> To be honest, I am running a third party script (not mine) and the
> developers have no idea on the error.
>
> Do you mean that running "hadoop -jar indexdata.jar `pwd`/result hdfs://
> 127.0.0.1:9000/data-Index" is a better one? For that, I get this error:
>
> [mahmood@tiger Index]$ hadoop jar indexdata.jar ${WORK_DIR}/result hdfs://
> 127.0.0.1:9000/data-Index Warning: $HADOOP_HOME is deprecated.
>
> Exception in thread "main" java.lang.UnsupportedClassVersionError:
> IndexHDFS : Unsupported major.minor version 51.0
>         at java.lang.ClassLoader.defineClass1(Native Method)
>         at java.lang.ClassLoader.defineClass(ClassLoader.java:643)
>         at
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>         at java.net.URLClassLoader.defineClass(URLClassLoader.java:277)
>         at java.net.URLClassLoader.access$000(URLClassLoader.java:73)
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:212)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:268)
>         at java.lang.Class.forName0(Native Method)
>         at java.lang.Class.forName(Class.java:274)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:149)
>
>
>
>
> Regards,
> Mahmood
>
>
>
>
>
>
>

Re: ipc.client RetryUpToMaximumCountWithFixedSleep

Posted by Ashutosh Kumar <as...@gmail.com>.
Looks like your Java version is lower than the one used to create the jar file.
Can you recompile and create the jar in your environment, or upgrade your Java
version?
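In case it helps: the required version is stored in bytes 6-7 (big-endian) of every .class file, so you can read it straight off before recompiling anything. Major version 51 = Java 7, 50 = Java 6. A minimal sketch — it writes a dummy class-file header so it runs anywhere; with the real jar you would pipe `unzip -p indexdata.jar IndexHDFS.class` into the same `od`:

```shell
# Write a dummy .class header: magic CAFEBABE, minor 0, major 0x33 (= 51, Java 7).
printf '\xca\xfe\xba\xbe\x00\x00\x00\x33' > Demo.class
# Read bytes 6-7 as the big-endian major version.
major=$(od -An -j6 -N2 -tu1 Demo.class | awk '{print $1*256 + $2}')
echo "class file major version: $major"
rm Demo.class
```

If that prints 51 for IndexHDFS.class while `java -version` reports 1.6, the UnsupportedClassVersionError is explained.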

On Thu, Apr 30, 2015 at 1:20 PM, Mahmood Naderan <nt...@yahoo.com>
wrote:

> There was a syntax error in the previous post. The correct is:
>
> [mahmood@tiger Index]$ hadoop jar indexdata.jar `pwd`/result hdfs://
> 127.0.0.1:9000/data-Index
> Warning: $HADOOP_HOME is deprecated.
>
> Exception in thread "main" java.lang.UnsupportedClassVersionError:
> IndexHDFS : Unsupported major.minor version 51.0
>         at java.lang.ClassLoader.defineClass1(Native Method)
>         at java.lang.ClassLoader.defineClass(ClassLoader.java:643)
>         at
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>         at java.net.URLClassLoader.defineClass(URLClassLoader.java:277)
>         at java.net.URLClassLoader.access$000(URLClassLoader.java:73)
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:212)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:268)
>         at java.lang.Class.forName0(Native Method)
>         at java.lang.Class.forName(Class.java:274)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:149)
>
>
> Regards,
> Mahmood
>
>
>
>   On Thursday, April 30, 2015 12:17 PM, Mahmood Naderan <
> nt_mahmood@yahoo.com> wrote:
>
>
> Can you explain more?
> To be honest, I am running a third party script (not mine) and the
> developers have no idea on the error.
>
> Do you mean that running "hadoop -jar indexdata.jar `pwd`/result hdfs://
> 127.0.0.1:9000/data-Index" is a better one? For that, I get this error:
>
> [mahmood@tiger Index]$ hadoop jar indexdata.jar ${WORK_DIR}/result hdfs://
> 127.0.0.1:9000/data-Index Warning: $HADOOP_HOME is deprecated.
>
> Exception in thread "main" java.lang.UnsupportedClassVersionError:
> IndexHDFS : Unsupported major.minor version 51.0
>         at java.lang.ClassLoader.defineClass1(Native Method)
>         at java.lang.ClassLoader.defineClass(ClassLoader.java:643)
>         at
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>         at java.net.URLClassLoader.defineClass(URLClassLoader.java:277)
>         at java.net.URLClassLoader.access$000(URLClassLoader.java:73)
>         at java.net.URLClassLoader$1.run(URLClassLoader.java:212)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
>         at java.lang.ClassLoader.loadClass(ClassLoader.java:268)
>         at java.lang.Class.forName0(Native Method)
>         at java.lang.Class.forName(Class.java:274)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:149)
>
>
>
>
> Regards,
> Mahmood
>
>
>
>
>
>
>

Re: ipc.client RetryUpToMaximumCountWithFixedSleep

Posted by Mahmood Naderan <nt...@yahoo.com>.
There was a syntax error in the previous post. Here is the correct command:
[mahmood@tiger Index]$ hadoop jar indexdata.jar `pwd`/result hdfs://127.0.0.1:9000/data-Index
Warning: $HADOOP_HOME is deprecated.

Exception in thread "main" java.lang.UnsupportedClassVersionError: IndexHDFS : Unsupported major.minor version 51.0
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:643)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:277)
        at java.net.URLClassLoader.access$000(URLClassLoader.java:73)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:212)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:268)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:274)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:149)

 Regards,
Mahmood 


     On Thursday, April 30, 2015 12:17 PM, Mahmood Naderan <nt...@yahoo.com> wrote:
   

 Can you explain more?
To be honest, I am running a third-party script (not mine) and the developers have no idea about the error.
Do you mean that running "hadoop jar indexdata.jar `pwd`/result hdfs://127.0.0.1:9000/data-Index" would be better? For that, I get this error:
[mahmood@tiger Index]$ hadoop jar indexdata.jar ${WORK_DIR}/result hdfs://127.0.0.1:9000/data-Index
Warning: $HADOOP_HOME is deprecated.

Exception in thread "main" java.lang.UnsupportedClassVersionError: IndexHDFS : Unsupported major.minor version 51.0
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:643)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:277)
        at java.net.URLClassLoader.access$000(URLClassLoader.java:73)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:212)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:268)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:274)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:149)


 Regards,
Mahmood 



   

  

Re: ipc.client RetryUpToMaximumCountWithFixedSleep

Posted by Mahmood Naderan <nt...@yahoo.com>.
Can you explain more?
To be honest, I am running a third-party script (not mine) and the developers have no idea about the error.
Do you mean that running "hadoop jar indexdata.jar `pwd`/result hdfs://127.0.0.1:9000/data-Index" would be better? For that, I get this error:
[mahmood@tiger Index]$ hadoop jar indexdata.jar ${WORK_DIR}/result hdfs://127.0.0.1:9000/data-Index
Warning: $HADOOP_HOME is deprecated.

Exception in thread "main" java.lang.UnsupportedClassVersionError: IndexHDFS : Unsupported major.minor version 51.0
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:643)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:277)
        at java.net.URLClassLoader.access$000(URLClassLoader.java:73)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:212)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:323)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:268)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:274)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:149)


 Regards,
Mahmood 



  

Re: ipc.client RetryUpToMaximumCountWithFixedSleep

Posted by Chris Mawata <ch...@gmail.com>.
You are running it with the java command rather than "hadoop jar" ... Do you
have a mechanism inside your Java code to find Hadoop, like creating your
own Configuration?
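For what it's worth, the difference between the two launchers can be sketched like this. The path below is a hypothetical $HADOOP_HOME (the real one depends on the install); "hadoop jar" effectively builds a classpath like this for you, which is why plain "java -jar" never sees core-site.xml and keeps retrying a stale namenode address:

```shell
# Hypothetical install location -- adjust to the real one.
HADOOP_HOME=/usr/local/hadoop
# "hadoop jar foo.jar" roughly expands to java with Hadoop's jars AND its
# conf directory on the classpath, so new Configuration() finds core-site.xml.
CP="indexdata.jar:${HADOOP_HOME}/conf:${HADOOP_HOME}/*:${HADOOP_HOME}/lib/*"
echo java -cp "$CP" IndexHDFS /path/to/result hdfs://127.0.0.1:9000/data-Index
```

The key entry is ${HADOOP_HOME}/conf: without it, the client falls back to compiled-in defaults and you get the RetryUpToMaximumCountWithFixedSleep loop.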
On Apr 30, 2015 1:54 AM, "Mahmood Naderan" <nt...@yahoo.com> wrote:

> Hi,
> when I run the following command, I get ipc.client timeout error.
>
> [mahmood@tiger Index]$ java -jar indexdata.jar `pwd`/result hdfs://
> 127.0.0.1:9000/data-Index
> 15/04/30 10:21:03 INFO ipc.Client: Retrying connect to server:
> localhost.localdomain/127.0.0.1:9000. Already tried 0 time(s); retry
> policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1
> SECONDS)
>
> [mahmood@tiger Index]$ ls
> data-Index  genData_Index.sh  indexdata.jar  result  run-Index.sh
> [mahmood@tiger Index]$ ls data-Index/
> lda_wiki1w_1  lda_wiki1w_2
>
>
>
>
> Hadoop is up according to the report
>
> [mahmood@tiger Index]$ hadoop dfsadmin -report
> Warning: $HADOOP_HOME is deprecated.
> Configured Capacity: 2953524240384 (2.69 TB)
> Present Capacity: 602299084800 (560.93 GB)
> DFS Remaining: 601237344256 (559.95 GB)
> DFS Used: 1061740544 (1012.55 MB)
> DFS Used%: 0.18%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> -------------------------------------------------
> Datanodes available: 1 (1 total, 0 dead)
> Name: 127.0.0.1:50010
> Decommission Status : Normal
> Configured Capacity: 2953524240384 (2.69 TB)
> DFS Used: 1061740544 (1012.55 MB)
> Non DFS Used: 2351225155584 (2.14 TB)
> DFS Remaining: 601237344256(559.95 GB)
> DFS Used%: 0.04%
> DFS Remaining%: 20.36%
> Last contact: Thu Apr 30 10:22:03 IRDT 2015
>
>
> Do you have any idea to fix that?
> Regards,
> Mahmood
>
