Posted to common-user@hadoop.apache.org by ch huang <ju...@gmail.com> on 2014/11/05 10:19:47 UTC
issue about pig can not know HDFS HA configuration
hi, maillist:
I set up NameNode HA in my HDFS cluster, but it seems Pig does not recognize it. Why?
2014-11-05 14:34:54,710 [JobControl] INFO  org.apache.hadoop.mapreduce.JobSubmitter - Cleaning up the staging area file:/tmp/hadoop-root/mapred/staging/root1861403840/.staging/job_local1861403840_0001
2014-11-05 14:34:54,716 [JobControl] WARN  org.apache.hadoop.security.UserGroupInformation - PriviledgedActionException as:root (auth:SIMPLE) cause:org.apache.pig.backend.executionengine.ExecException: ERROR 2118: java.net.UnknownHostException: develop
2014-11-05 14:34:54,717 [JobControl] INFO  org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob - PigLatin:DefaultJobName got an error while submitting
org.apache.pig.backend.executionengine.ExecException: ERROR 2118: java.net.UnknownHostException: develop
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:288)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:493)
    at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:510)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1295)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1292)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1292)
    at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
    at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:191)
    at java.lang.Thread.run(Thread.java:744)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:270)
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: develop
    at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:377)
    at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:237)
    at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:141)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:576)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:521)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:146)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2397)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
    at org.apache.hcatalog.mapreduce.HCatBaseInputFormat.setInputPath(HCatBaseInputFormat.java:326)
    at org.apache.hcatalog.mapreduce.HCatBaseInputFormat.getSplits(HCatBaseInputFormat.java:127)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:274)
    ... 18 more
Caused by: java.net.UnknownHostException: develop
    ... 33 more
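[Editor's note: the `NameNodeProxies.createNonHAProxy` frame in the trace shows the client treating the nameservice "develop" as an ordinary hostname, which typically means the HA definition is missing from the hdfs-site.xml on the client's classpath. As a hedged sketch only (the nameservice name "develop" comes from the log above, but the NameNode IDs and hostnames nn1/nn2 and nn1.example.com/nn2.example.com are placeholders, not values from this thread), an HA-aware client configuration looks roughly like:]

```xml
<!-- Sketch of client-side hdfs-site.xml HA settings. Nameservice "develop"
     is from the log; nn1/nn2 and the hostnames are hypothetical. -->
<property>
  <name>dfs.nameservices</name>
  <value>develop</value>
</property>
<property>
  <name>dfs.ha.namenodes.develop</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.develop.nn1</name>
  <value>nn1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.develop.nn2</name>
  <value>nn2.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.develop</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

Without the failover proxy provider entry, the HDFS client falls back to the non-HA code path seen in the trace and tries to resolve the nameservice as a host.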
Re: issue about pig can not know HDFS HA configuration
Posted by Jagannath Naidu <ja...@fosteringlinux.com>.
got it, thanks
On 05/11/2014, ch huang <ju...@gmail.com> wrote:
> this name is not a host name; it is the NN HA service name. Behind the name are
> two NN boxes, one active node and one standby node.
>
> On Wed, Nov 5, 2014 at 7:41 PM, Jagannath Naidu <
> jagannath.naidu@fosteringlinux.com> wrote:
>
>>
>>
>> On 5 November 2014 14:49, ch huang <ju...@gmail.com> wrote:
>>
>>> hi,maillist:
>>> i set namenode HA in my HDFS cluster,but seems pig can not know it
>>> ,why?
>>>
>>> 2014-11-05 14:34:54,710 [JobControl] INFO
>>> org.apache.hadoop.mapreduce.JobSubmitter - Cleaning up the staging area
>>> file:/tmp/hadoop-root/mapred/staging/root1861403840/.staging/job_local1861403840_0001
>>> 2014-11-05 14:34:54,716 [JobControl] WARN
>>> org.apache.hadoop.security.UserGroupInformation -
>>> PriviledgedActionException as:root (auth:SIMPLE)
>>> cause:org.apache.pig.backend.executionengine.ExecException: ERROR 2118:
>>> java.net.UnknownHostException: develop
>>>
>>
>> Unknown host exception: this can be the issue. Check that the host is
>> discoverable either from DNS or from the hosts file.
>>
>>
>>> [stack trace snipped; see the original message above]
>>>
>>
>>
>> --
>>
>> Jaggu Naidu
>>
>
--
Thanks & Regards
B Jagannath
+919871324006
Re: issue about pig can not know HDFS HA configuration
Posted by ch huang <ju...@gmail.com>.
this name is not a host name; it is the NN HA service name. Behind the name are
two NN boxes, one active node and one standby node.
On Wed, Nov 5, 2014 at 7:41 PM, Jagannath Naidu <
jagannath.naidu@fosteringlinux.com> wrote:
>
>
> On 5 November 2014 14:49, ch huang <ju...@gmail.com> wrote:
>
>> hi,maillist:
>> i set namenode HA in my HDFS cluster,but seems pig can not know it
>> ,why?
>>
>> 2014-11-05 14:34:54,710 [JobControl] INFO
>> org.apache.hadoop.mapreduce.JobSubmitter - Cleaning up the staging area
>> file:/tmp/hadoop-root/mapred/staging/root1861403840/.staging/job_local1861403840_0001
>> 2014-11-05 14:34:54,716 [JobControl] WARN
>> org.apache.hadoop.security.UserGroupInformation -
>> PriviledgedActionException as:root (auth:SIMPLE)
>> cause:org.apache.pig.backend.executionengine.ExecException: ERROR 2118:
>> java.net.UnknownHostException: develop
>>
>
> Unknown host exception: this can be the issue. Check that the host is
> discoverable either from DNS or from the hosts file.
>
>
>> [stack trace snipped; see the original message above]
>>
>>
>
>
> --
>
> Jaggu Naidu
>
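[Editor's note: the thread's resolution is that "develop" is an HA nameservice, so the fix is a complete HA client configuration rather than DNS. As a rough, hypothetical illustration (not part of the thread), a small script can check whether a client-side hdfs-site.xml carries the keys an HA client needs; the key names are the standard HDFS HA ones, but the helper itself and its sample input are assumptions:]

```python
# Hypothetical helper (not from the thread): given the text of a client-side
# hdfs-site.xml, report which HA-related keys are missing for a nameservice.
import xml.etree.ElementTree as ET

REQUIRED_KEY_TEMPLATES = [
    "dfs.nameservices",
    "dfs.ha.namenodes.{ns}",
    "dfs.client.failover.proxy.provider.{ns}",
]

def missing_ha_keys(hdfs_site_xml, nameservice):
    """Return the HA client keys absent from the given hdfs-site.xml text."""
    root = ET.fromstring(hdfs_site_xml)
    present = {p.findtext("name") for p in root.iter("property")}
    required = [t.format(ns=nameservice) for t in REQUIRED_KEY_TEMPLATES]
    # Each NameNode listed under dfs.ha.namenodes.<ns> also needs an
    # rpc-address entry of its own.
    nn_ids_key = "dfs.ha.namenodes.{ns}".format(ns=nameservice)
    for prop in root.iter("property"):
        if prop.findtext("name") == nn_ids_key:
            for nn in (prop.findtext("value") or "").split(","):
                required.append(
                    "dfs.namenode.rpc-address.%s.%s" % (nameservice, nn.strip()))
    return [k for k in required if k not in present]

if __name__ == "__main__":
    # Config that declares the nameservice but omits the failover provider,
    # which matches the non-HA fallback seen in the trace above.
    sample = """<configuration>
      <property><name>dfs.nameservices</name><value>develop</value></property>
      <property><name>dfs.ha.namenodes.develop</name><value>nn1,nn2</value></property>
      <property><name>dfs.namenode.rpc-address.develop.nn1</name><value>a:8020</value></property>
      <property><name>dfs.namenode.rpc-address.develop.nn2</name><value>b:8020</value></property>
    </configuration>"""
    print(missing_ha_keys(sample, "develop"))
```

A quicker spot check on a live client is `hdfs getconf -confKey dfs.nameservices`, which shows whether the nameservice definition is actually on the classpath Pig uses.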
>> at
>> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:146)
>> at
>> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2397)
>> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
>> at
>> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
>> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
>> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
>> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>> at
>> org.apache.hcatalog.mapreduce.HCatBaseInputFormat.setInputPath(HCatBaseInputFormat.java:326)
>> at
>> org.apache.hcatalog.mapreduce.HCatBaseInputFormat.getSplits(HCatBaseInputFormat.java:127)
>> at
>> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:274)
>> ... 18 more
>> Caused by: java.net.UnknownHostException: develop
>> ... 33 more
>>
>>
>
>
> --
>
> Jaggu Naidu
>
RE: issue about pig can not know HDFS HA configuration
Posted by Brahma Reddy Battula <br...@huawei.com>.
Hello Jagannath,
The exception below appears when the Pig client cannot find the HDFS configuration files. You need to do the following:
1. Set the PIG_CLASSPATH environment variable to the location of the cluster configuration directory (the directory that contains the core-site.xml, hdfs-site.xml and mapred-site.xml files):
export PIG_CLASSPATH=/mycluster/conf
2. Set the HADOOP_CONF_DIR environment variable to the location of the cluster configuration directory:
export HADOOP_CONF_DIR=/mycluster/conf
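A minimal sketch of those two steps, plus a sanity check that the directory really contains the HDFS configuration (the path /etc/hadoop/conf is an example; substitute your cluster's actual configuration directory):

```shell
# Example only: substitute your cluster's configuration directory.
export HADOOP_CONF_DIR=/etc/hadoop/conf
export PIG_CLASSPATH="$HADOOP_CONF_DIR"

# Confirm the directory actually contains the file that defines
# the HA nameservice; if it is missing, Pig cannot resolve it.
if [ -f "$HADOOP_CONF_DIR/hdfs-site.xml" ]; then
    echo "hdfs-site.xml found in $HADOOP_CONF_DIR"
else
    echo "hdfs-site.xml missing from $HADOOP_CONF_DIR"
fi
```

Run this in the same shell from which you launch Pig, since environment variables are not shared across shells.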
Thanks & Regards
Brahma Reddy Battula
________________________________
From: Jagannath Naidu [jagannath.naidu@fosteringlinux.com]
Sent: Wednesday, November 05, 2014 5:11 PM
To: user@hadoop.apache.org
Subject: Re: issue about pig can not know HDFS HA configuration
On 5 November 2014 14:49, ch huang <ju...@gmail.com>> wrote:
hi,maillist:
i set namenode HA in my HDFS cluster,but seems pig can not know it ,why?
2014-11-05 14:34:54,710 [JobControl] INFO org.apache.hadoop.mapreduce.JobSubmitter - Cleaning up the staging area file:/tmp/hadoop-root/mapred/staging/root1861403840/.staging/job_local1861403840_0001
2014-11-05 14:34:54,716 [JobControl] WARN org.apache.hadoop.security.UserGroupInformation - PriviledgedActionException as:root (auth:SIMPLE) cause:org.apache.pig.backend.executionengine.ExecException: ERROR 2118: java.net.UnknownHostException: develop
Unknown host exception; this can be the issue. Check that the host is discoverable either from DNS or from /etc/hosts.
2014-11-05 14:34:54,717 [JobControl] INFO org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob - PigLatin:DefaultJobName got an error while submitting
org.apache.pig.backend.executionengine.ExecException: ERROR 2118: java.net.UnknownHostException: develop
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:288)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:493)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:510)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1295)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1292)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1292)
at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:191)
at java.lang.Thread.run(Thread.java:744)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:270)
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: develop
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:377)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:237)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:141)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:576)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:521)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:146)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2397)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hcatalog.mapreduce.HCatBaseInputFormat.setInputPath(HCatBaseInputFormat.java:326)
at org.apache.hcatalog.mapreduce.HCatBaseInputFormat.getSplits(HCatBaseInputFormat.java:127)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:274)
... 18 more
Caused by: java.net.UnknownHostException: develop
... 33 more
--
Jaggu Naidu
Re: issue about pig can not know HDFS HA configuration
Posted by Jagannath Naidu <ja...@fosteringlinux.com>.
On 5 November 2014 14:49, ch huang <ju...@gmail.com> wrote:
> hi,maillist:
> i set namenode HA in my HDFS cluster,but seems pig can not know it ,why?
>
> 2014-11-05 14:34:54,710 [JobControl] INFO
> org.apache.hadoop.mapreduce.JobSubmitter - Cleaning up the staging area
> file:/tmp/hadoop-root/mapred/staging/root1861403840/.staging/job_local1861403840_0001
> 2014-11-05 14:34:54,716 [JobControl] WARN
> org.apache.hadoop.security.UserGroupInformation -
> PriviledgedActionException as:root (auth:SIMPLE)
> cause:org.apache.pig.backend.executionengine.ExecException: ERROR 2118:
> java.net.UnknownHostException: develop
>
Unknown host exception; this can be the issue. Check that the host is
discoverable either from DNS or from /etc/hosts.
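That check can be sketched as follows (`develop` is the name from the error; as the follow-up in this thread points out, it may be an HA nameservice rather than a real host, in which case the lookup is expected to fail and the fix is client-side HDFS configuration, not name resolution):

```shell
# Check whether "develop" resolves via DNS or /etc/hosts.
# getent consults the same name-service switch the JVM resolver uses.
if getent hosts develop > /dev/null 2>&1; then
    echo "develop resolves as a host"
else
    echo "develop does not resolve; check /etc/hosts, DNS, or HDFS HA config"
fi
```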
> 2014-11-05 14:34:54,717 [JobControl] INFO
> org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob -
> PigLatin:DefaultJobName got an error while submitting
> org.apache.pig.backend.executionengine.ExecException: ERROR 2118:
> java.net.UnknownHostException: develop
> at
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:288)
> at
> org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:493)
> at
> org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:510)
> at
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1295)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1292)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1292)
> at
> org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
> org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
> at
> org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:191)
> at java.lang.Thread.run(Thread.java:744)
> at
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:270)
> Caused by: java.lang.IllegalArgumentException:
> java.net.UnknownHostException: develop
> at
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:377)
> at
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:237)
> at
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:141)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:576)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:521)
> at
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:146)
> at
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2397)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
> at
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
> at
> org.apache.hcatalog.mapreduce.HCatBaseInputFormat.setInputPath(HCatBaseInputFormat.java:326)
> at
> org.apache.hcatalog.mapreduce.HCatBaseInputFormat.getSplits(HCatBaseInputFormat.java:127)
> at
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:274)
> ... 18 more
> Caused by: java.net.UnknownHostException: develop
> ... 33 more
>
>
--
Jaggu Naidu
Re: issue about pig can not know HDFS HA configuration
Posted by Jagannath Naidu <ja...@fosteringlinux.com>.
On 5 November 2014 14:49, ch huang <ju...@gmail.com> wrote:
> hi,maillist:
> i set namenode HA in my HDFS cluster,but seems pig can not know it ,why?
>
> 2014-11-05 14:34:54,710 [JobControl] INFO
> org.apache.hadoop.mapreduce.JobSubmitter - Cleaning up the staging area
> file:/tmp/hadoop-root/mapred/staging/root1861403840/.staging/job_local1861403840_0001
> 2014-11-05 14:34:54,716 [JobControl] WARN
> org.apache.hadoop.security.UserGroupInformation -
> PriviledgedActionException as:root (auth:SIMPLE)
> cause:org.apache.pig.backend.executionengine.ExecException: ERROR 2118:
> java.net.UnknownHostException: develop
>
unknown host exception, this can be the issue. Check that the host is
discoverable either form dns or from hosts.
> 2014-11-05 14:34:54,717 [JobControl] INFO
> org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob -
> PigLatin:DefaultJobName got an error while submitting
> org.apache.pig.backend.executionengine.ExecException: ERROR 2118:
> java.net.UnknownHostException: develop
> at
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:288)
> at
> org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:493)
> at
> org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:510)
> at
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1295)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1292)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1292)
> at
> org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
> org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
> at
> org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:191)
> at java.lang.Thread.run(Thread.java:744)
> at
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:270)
> Caused by: java.lang.IllegalArgumentException:
> java.net.UnknownHostException: develop
> at
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:377)
> at
> org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:237)
> at
> org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:141)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:576)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:521)
> at
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:146)
> at
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2397)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
> at
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
> at
> org.apache.hcatalog.mapreduce.HCatBaseInputFormat.setInputPath(HCatBaseInputFormat.java:326)
> at
> org.apache.hcatalog.mapreduce.HCatBaseInputFormat.getSplits(HCatBaseInputFormat.java:127)
> at
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigInputFormat.getSplits(PigInputFormat.java:274)
> ... 18 more
> Caused by: java.net.UnknownHostException: develop
> ... 33 more
>
>
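One more thing worth checking, since the thread is about NameNode HA: if "develop" is your logical nameservice ID rather than a real host, the trace (note the call to NameNodeProxies.createNonHAProxy) suggests the client never saw the HA settings and fell back to treating the nameservice ID as a hostname. In that case no DNS entry will help; the hdfs-site.xml on Pig's classpath needs the standard HA client properties. A minimal sketch, assuming the nameservice is "develop" and the NameNode IDs and hostnames below are placeholders for your own:

```xml
<!-- Sketch of HA client settings for a nameservice named "develop".
     NameNode IDs (nn1/nn2) and hostnames are assumptions; adjust to
     match your cluster. -->
<property>
  <name>dfs.nameservices</name>
  <value>develop</value>
</property>
<property>
  <name>dfs.ha.namenodes.develop</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.develop.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.develop.nn2</name>
  <value>namenode2.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.develop</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

Make sure the directory containing this hdfs-site.xml is on Pig's classpath (e.g. HADOOP_CONF_DIR is exported in the shell that launches Pig), otherwise Pig's client will not pick up the HA configuration.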
--
Jaggu Naidu