Posted to notifications@accumulo.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2017/02/01 18:04:52 UTC

[jira] [Commented] (ACCUMULO-4579) Continuous ingest failing due to bad Hadoop configuration

    [ https://issues.apache.org/jira/browse/ACCUMULO-4579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15848718#comment-15848718 ] 

ASF GitHub Bot commented on ACCUMULO-4579:
------------------------------------------

Github user mikewalch commented on a diff in the pull request:

    https://github.com/apache/accumulo-testing/pull/3#discussion_r98955888
  
    --- Diff: core/src/main/java/org/apache/accumulo/testing/core/TestEnv.java ---
    @@ -96,15 +97,22 @@ public String getPid() {
       }
     
       public Configuration getHadoopConfiguration() {
    -    Configuration config = new Configuration();
    -    config.set("mapreduce.framework.name", "yarn");
    -    // Setting below are required due to bundled jar breaking default
    -    // config.
    -    // See
    -    // http://stackoverflow.com/questions/17265002/hadoop-no-filesystem-for-scheme-file
    -    config.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
    -    config.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());
    -    return config;
    +    if (hadoopConfig == null) {
    +      String hadoopPrefix = System.getenv("HADOOP_PREFIX");
    +      if (hadoopPrefix == null || hadoopPrefix.isEmpty()) {
    +        throw new IllegalArgumentException("HADOOP_PREFIX must be set in env");
    +      }
    +      hadoopConfig = new Configuration();
    +      hadoopConfig.addResource(new Path(hadoopPrefix + "/etc/hadoop/core-site.xml"));
    --- End diff --
    
    I pushed another commit 62e91527 where I created new properties in `accumulo-testing.properties` to avoid relying on loading Hadoop config files using `HADOOP_PREFIX`.  I also added `accumulo` to several properties that configure Accumulo scanners.
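    For context, a properties-based approach replaces the environment-variable lookup with explicit settings in `accumulo-testing.properties`. The property names and values below are illustrative only, not the actual keys added in commit 62e91527:

    ```properties
    # Hypothetical examples: point the test framework at the cluster directly,
    # rather than loading core-site.xml/yarn-site.xml found via HADOOP_PREFIX
    test.common.hdfs.root=hdfs://namenode:8020
    test.common.yarn.resource.manager=rmhost:8032
    ```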


> Continuous ingest failing due to bad Hadoop configuration
> ---------------------------------------------------------
>
>                 Key: ACCUMULO-4579
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-4579
>             Project: Accumulo
>          Issue Type: Bug
>          Components: test
>    Affects Versions: 2.0.0
>            Reporter: Mike Walch
>            Assignee: Mike Walch
>             Fix For: 2.0.0
>
>
> I ran the continuous ingest test in the accumulo-testing repo on a distributed cluster. The test failed because Twill stored its configuration on the local file system rather than in HDFS. This occurred because the YarnTwillRunnerService was not being provided with the proper Hadoop configuration.
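
The patch in the pull request follows a fail-fast pattern: validate `HADOOP_PREFIX` up front, then resolve the Hadoop config files under it. A minimal, self-contained sketch of that pattern is below; the class and method names are hypothetical (the real code lives in `TestEnv.getHadoopConfiguration()` and builds an actual Hadoop `Configuration`), and the path resolution is pulled into a helper so it can be exercised without a Hadoop install:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch of the fix's pattern, not the actual TestEnv code: fail fast when
// HADOOP_PREFIX is missing, otherwise resolve core-site.xml under it.
public class HadoopConfigSketch {

  /** Resolves core-site.xml under the given Hadoop install prefix. */
  static String resolveCoreSite(String hadoopPrefix) {
    if (hadoopPrefix == null || hadoopPrefix.isEmpty()) {
      // Mirrors the exception thrown in the patched TestEnv
      throw new IllegalArgumentException("HADOOP_PREFIX must be set in env");
    }
    Path coreSite = Paths.get(hadoopPrefix, "etc", "hadoop", "core-site.xml");
    return coreSite.toString();
  }

  public static void main(String[] args) {
    String prefix = System.getenv("HADOOP_PREFIX");
    if (prefix == null || prefix.isEmpty()) {
      System.out.println("HADOOP_PREFIX not set; TestEnv would throw here");
    } else {
      System.out.println(resolveCoreSite(prefix));
    }
  }
}
```

Validating the environment eagerly turns a confusing downstream failure (Twill writing to the local file system) into an immediate, clearly-worded error at startup.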



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)