Posted to issues@ambari.apache.org by "Vitaly Brodetskyi (JIRA)" <ji...@apache.org> on 2016/05/11 15:25:13 UTC

[jira] [Updated] (AMBARI-16453) [Move Master][History Server] Service check fails after moving HistoryServer to another host

     [ https://issues.apache.org/jira/browse/AMBARI-16453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vitaly Brodetskyi updated AMBARI-16453:
---------------------------------------
    Status: Patch Available  (was: Open)

> [Move Master][History Server] Service check fails after moving HistoryServer to another host
> --------------------------------------------------------------------------------------------
>
>                 Key: AMBARI-16453
>                 URL: https://issues.apache.org/jira/browse/AMBARI-16453
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 2.4.0
>            Reporter: Vitaly Brodetskyi
>            Assignee: Vitaly Brodetskyi
>            Priority: Critical
>             Fix For: 2.4.0
>
>         Attachments: AMBARI-16453.patch
>
>
> {code}
> 16/05/06 07:06:23 ERROR lzo.GPLNativeCodeLoader: Could not load native gpl library
> java.lang.UnsatisfiedLinkError: no gplcompression in java.library.path
> 	at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
> 	at java.lang.Runtime.loadLibrary0(Runtime.java:870)
> 	at java.lang.System.loadLibrary(System.java:1122)
> 	at com.hadoop.compression.lzo.GPLNativeCodeLoader.<clinit>(GPLNativeCodeLoader.java:32)
> 	at com.hadoop.compression.lzo.LzoCodec.<clinit>(LzoCodec.java:71)
> 	at java.lang.Class.forName0(Native Method)
> 	at java.lang.Class.forName(Class.java:348)
> 	at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2147)
> 	at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2112)
> 	at org.apache.hadoop.io.compress.CompressionCodecFactory.getCodecClasses(CompressionCodecFactory.java:132)
> 	at org.apache.hadoop.io.compress.CompressionCodecFactory.<init>(CompressionCodecFactory.java:179)
> 	at org.apache.hadoop.mapreduce.lib.input.TextInputFormat.isSplitable(TextInputFormat.java:58)
> 	at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:399)
> 	at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
> 	at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
> 	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
> 	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
> 	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:422)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1719)
> 	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
> 	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
> 	at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
> 	at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
> 	at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> 	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> 16/05/06 07:06:23 ERROR lzo.LzoCodec: Cannot load native-lzo without native-hadoop
> 16/05/06 07:06:23 INFO mapreduce.JobSubmitter: number of splits:1
> 16/05/06 07:06:23 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1462518135627_0001
> 16/05/06 07:06:24 INFO impl.YarnClientImpl: Submitted application application_1462518135627_0001
> 16/05/06 07:06:24 INFO mapreduce.Job: The url to track the job: http://ambarirmps-5.openstacklocal:8088/proxy/application_1462518135627_0001/
> 16/05/06 07:06:24 INFO mapreduce.Job: Running job: job_1462518135627_0001
> 16/05/06 07:06:35 INFO mapreduce.Job: Job job_1462518135627_0001 running in uber mode : false
> 16/05/06 07:06:35 INFO mapreduce.Job:  map 0% reduce 0%
> 16/05/06 07:06:42 INFO mapreduce.Job:  map 100% reduce 0%
> {code}
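> 
> The trace shows why the codec load happens during job submission at all: CompressionCodecFactory reads io.compression.codecs from the job Configuration and initializes every listed codec class, so com.hadoop.compression.lzo.LzoCodec gets initialized on the submitting host, and GPLNativeCodeLoader logs the UnsatisfiedLinkError above when the native gplcompression library is not on that host's java.library.path. As a minimal, self-contained sketch of that loading step (illustration only, not part of the attached patch; the class name CodecLoadCheck is made up), the following triggers the same codec initialization on whichever host it runs:
> {code}
> // Hypothetical illustration class, not part of Ambari or of the patch.
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.io.compress.CompressionCodecFactory;
> 
> public class CodecLoadCheck {
>     public static void main(String[] args) {
>         // new Configuration() picks up core-site.xml from the classpath.
>         Configuration conf = new Configuration();
>         // The factory constructor resolves and initializes every codec class
>         // listed in io.compression.codecs; if LzoCodec is listed there and the
>         // native gplcompression library is missing on this host, the same
>         // "Could not load native gpl library" error is logged here.
>         new CompressionCodecFactory(conf);
>         System.out.println("Configured compression codecs initialized.");
>     }
> }
> {code}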



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)