Posted to dev@ambari.apache.org by "Hadoop QA (JIRA)" <ji...@apache.org> on 2015/01/10 03:43:34 UTC

[jira] [Commented] (AMBARI-8917) Rolling Upgrade - prepare function to copy tarballs based on new HDP version

    [ https://issues.apache.org/jira/browse/AMBARI-8917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14272283#comment-14272283 ] 

Hadoop QA commented on AMBARI-8917:
-----------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12691456/AMBARI-8917.patch
  against trunk revision .

    {color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/Ambari-trunk-test-patch/1273//console

This message is automatically generated.

> Rolling Upgrade - prepare function to copy tarballs based on new HDP version
> ----------------------------------------------------------------------------
>
>                 Key: AMBARI-8917
>                 URL: https://issues.apache.org/jira/browse/AMBARI-8917
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 2.0.0
>            Reporter: Dmitry Lysnichenko
>            Assignee: Alejandro Fernandez
>             Fix For: 2.0.0
>
>         Attachments: AMBARI-8917.patch
>
>
> The prepare_rolling_restart() functions call copy_tarballs_to_hdfs(), which incorrectly takes the HDP version from the first component that matches the regex in the output of hdp-select.
> Instead of using the first matching component, it should query one specific component, e.g., "hdp-select status hiveserver2" (see the sketch below).
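> A minimal sketch of the proposed approach; the helper name get_hdp_version_for_component, the parsing regex, and the "hiveserver2 - 2.2.1.0-2154" output format are illustrative assumptions, not the actual patch:
> {code}
> import re
> import subprocess
>
> def get_hdp_version_for_component(component):
>     """Query hdp-select for one specific component's HDP version.
>
>     Querying a named component (e.g. hiveserver2) avoids picking up
>     whichever component happens to match the regex first in the
>     full "hdp-select status" listing.
>     """
>     # "hdp-select status <component>" prints a line such as
>     # "hiveserver2 - 2.2.1.0-2154" (assumed format).
>     output = subprocess.check_output(
>         ["hdp-select", "status", component]).decode("utf-8")
>     match = re.search(r"(\d+\.\d+\.\d+\.\d+-\d+)", output)
>     if match is None:
>         raise ValueError("Cannot parse HDP version from: %s" % output)
>     return match.group(1)
>
> # Usage: derive the tarball destination from the queried version.
> version = get_hdp_version_for_component("hiveserver2")
> tarball_path = "/hdp/apps/%s/mapreduce/mapreduce.tar.gz" % version
> {code}
> With the version taken from the wrong component, the expected tarball path does not exist in HDFS and the service check fails: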
> 14/12/23 23:00:27 INFO client.RMProxy: Connecting to ResourceManager at 11b.vm/10.77.65.27:8050
> java.io.FileNotFoundException: File does not exist: hdfs://mycluster/hdp/apps/2.2.1.0-2154/mapreduce/mapreduce.tar.gz
> 	at org.apache.hadoop.fs.Hdfs.getFileStatus(Hdfs.java:137)
> 	at org.apache.hadoop.fs.AbstractFileSystem.resolvePath(AbstractFileSystem.java:460)
> 	at org.apache.hadoop.fs.FileContext$24.next(FileContext.java:2137)
> 	at org.apache.hadoop.fs.FileContext$24.next(FileContext.java:2133)
> 	at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
> 	at org.apache.hadoop.fs.FileContext.resolve(FileContext.java:2133)
> 	at org.apache.hadoop.fs.FileContext.resolvePath(FileContext.java:595)
> 	at org.apache.hadoop.mapreduce.JobSubmitter.addMRFrameworkToDistributedCache(JobSubmitter.java:753)
> 	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:435)
> 	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
> 	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> 	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
> 	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1314)
> 	at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
> 	at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
> 	at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> 	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> 2014-12-23 23:00:28,088 - Error while executing command 'service_check':
> Traceback (most recent call last):


