Posted to dev@hbase.apache.org by "Nick Dimiduk (Jira)" <ji...@apache.org> on 2020/03/17 00:14:00 UTC

[jira] [Created] (HBASE-23999) [flakey test] TestTableOutputFormatConnectionExhaust

Nick Dimiduk created HBASE-23999:
------------------------------------

             Summary: [flakey test] TestTableOutputFormatConnectionExhaust
                 Key: HBASE-23999
                 URL: https://issues.apache.org/jira/browse/HBASE-23999
             Project: HBase
          Issue Type: Test
          Components: test
    Affects Versions: 2.3.0
            Reporter: Nick Dimiduk


Hit this during the master startup sequence in the test.

{noformat}
2020-03-16 23:40:37,298 ERROR [StoreOpener-1588230740-1] conf.Configuration(2980): error parsing conf hbase-site.xml
com.ctc.wstx.exc.WstxEOFException: Unexpected EOF in prolog
 at [row,col,system-id]: [1,0,"file:/home/vagrant/repos/hbase/hbase-mapreduce/target/test-classes/hbase-site.xml"]
        at com.ctc.wstx.sr.StreamScanner.throwUnexpectedEOF(StreamScanner.java:687)
        at com.ctc.wstx.sr.BasicStreamReader.handleEOF(BasicStreamReader.java:2220)
        at com.ctc.wstx.sr.BasicStreamReader.nextFromProlog(BasicStreamReader.java:2126)
        at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1181)
        at org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3277)
        at org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3071)
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2964)
        at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2930)
        at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2805)
        at org.apache.hadoop.conf.Configuration.get(Configuration.java:1199)
        at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1253)
        at org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1659)
        at org.apache.hadoop.hbase.HBaseConfiguration.checkDefaultsVersion(HBaseConfiguration.java:70)
        at org.apache.hadoop.hbase.HBaseConfiguration.addHbaseResources(HBaseConfiguration.java:84)
        at org.apache.hadoop.hbase.HBaseConfiguration.create(HBaseConfiguration.java:98)
        at org.apache.hadoop.hbase.io.crypto.Context.<init>(Context.java:44)
        at org.apache.hadoop.hbase.io.crypto.Encryption$Context.<init>(Encryption.java:64)
        at org.apache.hadoop.hbase.io.crypto.Encryption$Context.<clinit>(Encryption.java:61)
        at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:228)
        at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5890)
        at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1096)
        at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1093)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:834)
2020-03-16 23:40:37,301 ERROR [master/bionic:0:becomeActiveMaster] regionserver.HRegion(1137): Could not initialize all stores for the region=hbase:meta,,1.1588230740
{noformat}
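
For what it's worth, "Unexpected EOF in prolog" means the parser hit end-of-file before any XML content, i.e. the {{hbase-site.xml}} on the test classpath was empty (or truncated mid-write) at the moment the configuration was loaded. A minimal sketch of that failure mode, assuming only that Hadoop's {{Configuration}} is on the classpath (the temp file here stands in for the corrupted resource):

{noformat}
// Sketch: parsing a zero-byte XML resource reproduces the
// "Unexpected EOF in prolog" error logged above.
import java.nio.file.Files;
import org.apache.hadoop.conf.Configuration;

public class EmptySiteFileRepro {
  public static void main(String[] args) throws Exception {
    // zero bytes, like a file caught mid-write
    java.nio.file.Path empty = Files.createTempFile("hbase-site", ".xml");
    Configuration conf = new Configuration(false);
    conf.addResource(new org.apache.hadoop.fs.Path(empty.toUri()));
    // forces loadResources(); throws a RuntimeException wrapping WstxEOFException
    conf.get("hbase.defaults.for.version");
  }
}
{noformat}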

Looking at the file under {{target/test-classes}}, it looks like it was written by YARN.

{noformat}
<?xml version="1.0" encoding="UTF-8" standalone="no"?><configuration>
<property><name>yarn.log-aggregation.file-formats</name><value>TFile</value><final>false</final><source>yarn-default.xml</source></property>
<property><name>hbase.master.mob.ttl.cleaner.period</name><value>86400</value><final>false</final><source>hbase-default.xml</source></property>
<property><name>dfs.namenode.resource.check.interval</name><value>5000</value><final>false</final><source>hdfs-default.xml</source></property>
<property><name>mapreduce.jobhistory.client.thread-count</name><value>10</value><final>false</final><source>mapred-default.xml</source></property>
...
{noformat}
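
The {{<final>}}/{{<source>}} children on every {{<property>}} are the giveaway: that's the shape {{Configuration.writeXml}} produces when a fully merged job configuration gets dumped, not what our checked-in {{hbase-site.xml}} looks like. A rough sketch of how a file in that shape can be produced (the particular merge and output path are just illustrative):

{noformat}
// Sketch: dumping a merged HBase + YARN configuration with writeXml()
// yields one <property> element per entry with <name>, <value>, <final>
// and <source> children, matching the fragment above.
import java.io.FileOutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class DumpMergedConf {
  public static void main(String[] args) throws Exception {
    Configuration conf = new YarnConfiguration(HBaseConfiguration.create());
    try (FileOutputStream out = new FileOutputStream("/tmp/merged-conf.xml")) {
      conf.writeXml(out);
    }
  }
}
{noformat}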

My guess is that something in the MR framework is left unconfigured, so it writes these temporary job configuration files to some default location (the first classpath entry, perhaps?), and parallel test runs are stomping on each other.
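
If that guess is right, one mitigation would be to point the MR/YARN scratch directories at the per-test data directory so nothing gets written anywhere near {{target/test-classes}}. A hedged sketch, using standard Hadoop keys and the testing utility's data dir (the exact wiring in the test would differ):

{noformat}
// Sketch only: redirect MR/YARN scratch space to a per-test directory so
// parallel runs don't stomp on shared classpath locations. Key names are
// standard Hadoop keys; the directory layout is illustrative.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseTestingUtility;

public class IsolatedMrDirs {
  static void isolateScratchDirs(HBaseTestingUtility util) {
    Configuration conf = util.getConfiguration();
    String scratch = util.getDataTestDir("mr-scratch").toString();
    conf.set("yarn.app.mapreduce.am.staging-dir", scratch + "/staging");
    conf.set("mapreduce.cluster.local.dir", scratch + "/local");
    conf.set("hadoop.tmp.dir", scratch + "/tmp");
  }
}
{noformat}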



--
This message was sent by Atlassian Jira
(v8.3.4#803005)