Posted to common-dev@hadoop.apache.org by "stack (JIRA)" <ji...@apache.org> on 2007/10/20 02:28:50 UTC

[jira] Created: (HADOOP-2083) [hbase] TestTableIndex failed in patch build #970 and #956

[hbase] TestTableIndex failed in patch build #970 and #956
----------------------------------------------------------

                 Key: HADOOP-2083
                 URL: https://issues.apache.org/jira/browse/HADOOP-2083
             Project: Hadoop
          Issue Type: Bug
          Components: contrib/hbase
            Reporter: stack


TestTableIndex failed in two nightly builds.

The fancy trick of passing around a complete configuration, with per-column indexing specification extensions inside it, as a single Configuration value is biting us.  The interpolation code has an upper bound of 20 interpolations.

Looking at whether I can run the interpolations before inserting the config.  Otherwise we need to have Configuration.substituteVars made protected so it can be overridden, or do the configuration for this job in another way.
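
For illustration, here is a minimal sketch of the first idea (the helper is hypothetical, not part of any attached patch, and it assumes the Configuration iterator and Configuration(boolean) constructor of later Hadoop versions): expand every ${var} up front, so the config serialized into a single value carries no references left to interpolate.

{code}
// Hypothetical sketch (not the committed patch): pre-run interpolation
// so the Configuration we serialize into a single value contains only
// fully-expanded strings, leaving nothing for substituteVars to chew on.
import java.util.Map;
import org.apache.hadoop.conf.Configuration;

public class FlattenConf {
  public static Configuration flatten(Configuration conf) {
    Configuration expanded = new Configuration(false); // don't load defaults
    for (Map.Entry<String, String> entry : conf) {
      // get() performs ${var} substitution, so the copy stores the
      // resolved value rather than the raw template.
      expanded.set(entry.getKey(), conf.get(entry.getKey()));
    }
    return expanded;
  }
}
{code}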

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Commented: (HADOOP-2083) [hbase] TestTableIndex failed in patch build #970 and #956

Posted by "stack (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-2083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12538988 ] 

stack commented on HADOOP-2083:
-------------------------------

Console is here, Ning: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/970/console.  Search for where TestTableIndex runs.  Below is the pertinent extract (I believe).  Build #956 looks to have had a different failure cause, one that may have since been fixed.

FYI, this failure seems to be rare.  I've been keeping an eye out, and more than 70 builds have run since without a recurrence.

{code}
    [junit] java.lang.IllegalStateException: Variable substitution depth too large: 20 <?xml version="1.0" encoding="UTF-8"?><configuration>
    [junit] <property><name>dfs.namenode.logging.level</name><value>info</value></property>
    [junit] <property><name>tasktracker.http.port</name><value>50060</value></property>
    [junit] <property><name>dfs.name.dir</name><value>${hadoop.tmp.dir}/dfs/name</value></property>
    [junit] <property><name>mapred.job.tracker.handler.count</name><value>10</value></property>
    [junit] <property><name>mapred.output.compression.type</name><value>RECORD</value></property>
    [junit] <property><name>dfs.datanode.dns.interface</name><value>default</value></property>
    [junit] <property><name>mapred.submit.replication</name><value>10</value></property>
    [junit] <property><name>fs.file.impl</name><value>org.apache.hadoop.fs.LocalFileSystem</value></property>
    [junit] <property><name>fs.ramfs.impl</name><value>org.apache.hadoop.fs.InMemoryFileSystem</value></property>
    [junit] <property><name>fs.hftp.impl</name><value>org.apache.hadoop.dfs.HftpFileSystem</value></property>
    [junit] <property><name>mapred.child.java.opts</name><value>-Xmx200m</value></property>
    [junit] <property><name>dfs.datanode.du.pct</name><value>0.98f</value></property>
    [junit] <property><name>mapred.max.tracker.failures</name><value>4</value></property>
    [junit] <property><name>map.sort.class</name><value>org.apache.hadoop.mapred.MergeSorter</value></property>
    [junit] <property><name>ipc.client.timeout</name><value>60000</value></property>
    [junit] <property><name>dfs.datanode.du.reserved</name><value>0</value></property>
    [junit] <property><name>mapred.tasktracker.tasks.maximum</name><value>2</value></property>
    [junit] <property><name>hbase.index.merge.factor</name><value>10</value></property>
    [junit] <property><name>fs.inmemory.size.mb</name><value>75</value></property>
    [junit] <property><name>mapred.compress.map.output</name><value>false</value></property>
    [junit] <property><name>tasktracker.http.bindAddress</name><value>0.0.0.0</value></property>
    [junit] <property><name>hadoop.rpc.socket.factory.class.default</name><value>org.apache.hadoop.net.StandardSocketFactory</value></property>
    [junit] <property><name>keep.failed.task.files</name><value>false</value></property>
    [junit] <property><name>mapred.map.output.compression.type</name><value>RECORD</value></property>
    [junit] <property><name>io.seqfile.lazydecompress</name><value>true</value></property>
    [junit] <property><name>io.skip.checksum.errors</name><value>false</value></property>
    [junit] <property><name>mapred.job.tracker.info.port</name><value>50030</value></property>
    [junit] <property><name>fs.s3.block.size</name><value>67108864</value></property>
    [junit] <property><name>dfs.client.block.write.retries</name><value>3</value></property>
    [junit] <property><name>dfs.replication.min</name><value>1</value></property>
    [junit] <property><name>mapred.userlog.limit.kb</name><value>0</value></property>
    [junit] <property><name>io.bytes.per.checksum</name><value>512</value></property>
    [junit] <property><name>fs.s3.maxRetries</name><value>4</value></property>
    [junit] <property><name>io.map.index.skip</name><value>0</value></property>
    [junit] <property><name>dfs.safemode.extension</name><value>30000</value></property>
    [junit] <property><name>hbase.index.optimize</name><value>true</value></property>
    [junit] <property><name>mapred.jobtracker.completeuserjobs.maximum</name><value>100</value></property>
    [junit] <property><name>mapred.system.dir</name><value>build/contrib/${contrib.name}/test/system</value></property>
    [junit] <property><name>mapred.userlog.retain.hours</name><value>24</value></property>
    [junit] <property><name>mapred.tasktracker.expiry.interval</name><value>600000</value></property>
    [junit] <property><name>mapred.log.dir</name><value>${hadoop.tmp.dir}/mapred/logs</value></property>
    [junit] <property><name>job.end.retry.interval</name><value>30000</value></property>
    [junit] <property><name>mapred.task.tracker.report.bindAddress</name><value>127.0.0.1</value></property>
    [junit] <property><name>local.cache.size</name><value>10737418240</value></property>
    [junit] <property><name>io.compression.codecs</name><value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec</value></property>
    [junit] <property><name>dfs.df.interval</name><value>60000</value></property>
    [junit] <property><name>dfs.replication.considerLoad</name><value>true</value></property>
    [junit] <property><name>fs.checkpoint.period</name><value>3600</value></property>
    [junit] <property><name>dfs.info.bindAddress</name><value>0.0.0.0</value></property>
    [junit] <property><name>jobclient.output.filter</name><value>FAILED</value></property>
    [junit] <property><name>mapred.output.compression.codec</name><value>org.apache.hadoop.io.compress.DefaultCodec</value></property>
    [junit] <property><name>ipc.client.connect.max.retries</name><value>10</value></property>
    [junit] <property><name>tasktracker.http.threads</name><value>40</value></property>
    [junit] <property><name>io.file.buffer.size</name><value>4096</value></property>
    [junit] <property><name>ipc.client.kill.max</name><value>10</value></property>
    [junit] <property><name>io.sort.mb</name><value>100</value></property>
    [junit] <property><name>mapred.tasktracker.dns.interface</name><value>default</value></property>
    [junit] <property><name>fs.s3.buffer.dir</name><value>${hadoop.tmp.dir}/s3</value></property>
    [junit] <property><name>mapred.min.split.size</name><value>0</value></property>
    [junit] <property><name>mapred.map.output.compression.codec</name><value>org.apache.hadoop.io.compress.DefaultCodec</value></property>
    [junit] <property><name>fs.checkpoint.dir</name><value>${hadoop.tmp.dir}/dfs/namesecondary</value></property>
    [junit] <property><name>io.seqfile.sorter.recordlimit</name><value>1000000</value></property>
    [junit] <property><name>fs.default.name</name><value>file:///</value></property>
    [junit] <property><name>ipc.client.maxidletime</name><value>120000</value></property>
    [junit] <property><name>dfs.secondary.info.bindAddress</name><value>0.0.0.0</value></property>
    [junit] <property><name>hbase.index.use.compound.file</name><value>true</value></property>
    [junit] <property><name>io.seqfile.compression.type</name><value>RECORD</value></property>
    [junit] <property><name>hadoop.native.lib</name><value>true</value></property>
    [junit] <property><name>mapred.local.dir.minspacestart</name><value>0</value></property>
    [junit] <property><name>hadoop.tmp.dir</name><value>${build.test}</value></property>
    [junit] <property><name>dfs.datanode.bindAddress</name><value>0.0.0.0</value></property>
    [junit] <property><name>mapred.map.tasks</name><value>2</value></property>
    [junit] <property><name>dfs.heartbeat.interval</name><value>3</value></property>
    [junit] <property><name>webinterface.private.actions</name><value>false</value></property>
    [junit] <property><name>mapred.reduce.parallel.copies</name><value>5</value></property>
    [junit] <property><name>mapred.local.dir</name><value>${hadoop.tmp.dir}/mapred/local</value></property>
    [junit] <property><name>hbase.index.max.field.length</name><value>10000</value></property>
    [junit] <property><name>dfs.datanode.dns.nameserver</name><value>default</value></property>
    [junit] <property><name>mapred.inmem.merge.threshold</name><value>1000</value></property>
    [junit] <property><name>mapred.speculative.execution</name><value>true</value></property>
    [junit] <property><name>mapred.tasktracker.dns.nameserver</name><value>default</value></property>
    [junit] <property><name>dfs.datanode.port</name><value>50010</value></property>
    [junit] <property><name>fs.trash.interval</name><value>0</value></property>
    [junit] <property><name>hbase.index.max.buffered.docs</name><value>500</value></property>
    [junit] <property><name>dfs.replication.max</name><value>512</value></property>
    [junit] <property><name>dfs.blockreport.intervalMsec</name><value>3600000</value></property>
    [junit] <property><name>dfs.block.size</name><value>67108864</value></property>
    [junit] <property><name>mapred.task.timeout</name><value>600000</value></property>
    [junit] <property><name>ipc.client.connection.maxidletime</name><value>1000</value></property>
    [junit] <property><name>fs.s3.sleepTimeSeconds</name><value>10</value></property>
    [junit] <property><name>dfs.client.buffer.dir</name><value>${hadoop.tmp.dir}/dfs/tmp</value></property>
    [junit] <property><name>mapred.output.compress</name><value>false</value></property>
    [junit] <property><name>mapred.local.dir.minspacekill</name><value>0</value></property>
    [junit] <property><name>dfs.replication</name><value>3</value></property>
    [junit] <property><name>mapred.reduce.max.attempts</name><value>4</value></property>
    [junit] <property><name>dfs.default.chunk.view.size</name><value>32768</value></property>
    [junit] <property><name>dfs.secondary.info.port</name><value>50090</value></property>
    [junit] <property><name>hadoop.logfile.count</name><value>10</value></property>
    [junit] <property><name>ipc.client.idlethreshold</name><value>4000</value></property>
    [junit] <property><name>mapred.job.tracker</name><value>local</value></property>
    [junit] <property><name>hadoop.logfile.size</name><value>10000000</value></property>
    [junit] <property><name>fs.checkpoint.size</name><value>67108864</value></property>
    [junit] <property><name>io.sort.factor</name><value>10</value></property>
    [junit] <property><name>dfs.info.port</name><value>50070</value></property>
    [junit] <property><name>mapred.temp.dir</name><value>${hadoop.tmp.dir}/mapred/temp</value></property>
    [junit] <property><name>job.end.retry.attempts</name><value>0</value></property>
    [junit] <property><name>dfs.data.dir</name><value>${hadoop.tmp.dir}/dfs/data</value></property>
    [junit] <property><name>mapred.reduce.tasks</name><value>1</value></property>
    [junit] <property><name>fs.s3.impl</name><value>org.apache.hadoop.fs.s3.S3FileSystem</value></property>
    [junit] <property><name>fs.trash.root</name><value>${hadoop.tmp.dir}/Trash</value></property>
    [junit] <property><name>dfs.namenode.handler.count</name><value>10</value></property>
    [junit] <property><name>io.seqfile.compress.blocksize</name><value>1000000</value></property>
    [junit] <property><name>fs.kfs.impl</name><value>org.apache.hadoop.fs.kfs.KosmosFileSystem</value></property>
    [junit] <property><name>ipc.server.listen.queue.size</name><value>128</value></property>
    [junit] <property><name>fs.hdfs.impl</name><value>org.apache.hadoop.dfs.DistributedFileSystem</value></property>
    [junit] <property><name>mapred.job.tracker.info.bindAddress</name><value>0.0.0.0</value></property>
    [junit] <property><name>hbase.index.rowkey.name</name><value>key</value></property>
    [junit] <property><name>dfs.safemode.threshold.pct</name><value>0.999f</value></property>
    [junit] <property><name>mapred.map.max.attempts</name><value>4</value></property>
    [junit] <column>
    [junit] <property><name>hbase.column.boost</name><value>3</value></property>
    [junit] <property><name>hbase.column.tokenize</name><value>false</value></property>
    [junit] <property><name>hbase.column.name</name><value>contents:</value></property>
    [junit] <property><name>hbase.column.store</name><value>true</value></property>
    [junit] <property><name>hbase.column.omit.norms</name><value>false</value></property>
    [junit] <property><name>hbase.column.index</name><value>true</value></property>
    [junit] </column></configuration>
    [junit] 	at org.apache.hadoop.conf.Configuration.substituteVars(Configuration.java:293)
    [junit] 	at org.apache.hadoop.conf.Configuration.get(Configuration.java:300)
    [junit] 	at org.apache.hadoop.hbase.mapred.IndexTableReduce.configure(IndexTableReduce.java:53)
    [junit] 	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:58)
    [junit] 	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:82)
    [junit] 	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:243)
    [junit] 	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:164)
    [junit] 2007-10-19 08:29:23,346 ERROR [expireTrackers] org.apache.hadoop.mapred.JobTracker$ExpireTrackers.run(JobTracker.java:308): Tracker Expiry Thread got exception: java.lang.InterruptedException: sleep interrupted
    [junit] 	at java.lang.Thread.sleep(Native Method)
    [junit] 	at org.apache.hadoop.mapred.JobTracker$ExpireTrackers.run(JobTracker.java:263)
    [junit] 	
{code}
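
For context, Configuration.substituteVars bounds its work roughly like the simplified paraphrase below (a sketch, not the actual Hadoop source): each pass expands one ${var} occurrence and rescans from the start, so a value embedding an entire configuration XML, with far more than 20 ${...} references as above, exhausts the bound.

{code}
// Simplified paraphrase of a depth-bounded substitution loop like the
// one that throws above; details differ from the real implementation.
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class SubstSketch {
  private static final int MAX_SUBST = 20;
  private static final Pattern VAR = Pattern.compile("\\$\\{([^\\}\\$ ]+)\\}");

  static String substitute(String expr, Map<String, String> raw) {
    if (expr == null) {
      return null;
    }
    String eval = expr;
    for (int depth = 0; depth < MAX_SUBST; depth++) {
      Matcher match = VAR.matcher(eval);
      if (!match.find()) {
        return eval; // fully expanded
      }
      String value = raw.get(match.group(1));
      if (value == null) {
        return eval; // unresolvable variable; give back what we have
      }
      // Expand this one occurrence, then rescan the whole string.
      eval = eval.substring(0, match.start()) + value + eval.substring(match.end());
    }
    throw new IllegalStateException(
        "Variable substitution depth too large: " + MAX_SUBST + " " + expr);
  }
}
{code}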



[jira] Updated: (HADOOP-2083) [hbase] TestTableIndex failed in patch build #970 and #956

Posted by "stack (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-2083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HADOOP-2083:
--------------------------

    Fix Version/s: 0.16.0
           Status: Patch Available  (was: Open)



[jira] Updated: (HADOOP-2083) [hbase] TestTableIndex failed in patch build #970 and #956

Posted by "stack (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-2083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HADOOP-2083:
--------------------------

    Resolution: Fixed
        Status: Resolved  (was: Patch Available)

Committed the patch that fixes the most common cause of the TestTableIndex failure.  Moved the blown interpolation limit over to its own issue: HADOOP-2136.  Resolving.



[jira] Commented: (HADOOP-2083) [hbase] TestTableIndex failed in patch build #970 and #956

Posted by "Ning Li (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-2083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12538962 ] 

Ning Li commented on HADOOP-2083:
---------------------------------

Could you please post the error message?



[jira] Updated: (HADOOP-2083) [hbase] TestTableIndex failed in patch build #970 and #956

Posted by "stack (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-2083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HADOOP-2083:
--------------------------

    Attachment: mrt.patch

Find the parent region BEFORE we start loading the table with content.  This way I'm sure to have the correct parent on hand later, when making assertions and finding the names of the daughter regions.
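
In outline, the reordering reads like this sketch; the helper names are invented stand-ins for the test's internals, not code from mrt.patch.

{code}
// Hypothetical outline of the reordering; helper names are invented.
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.HTable;

abstract class SplitOrderingSketch {
  abstract HRegionInfo findOnlyRegion(HTable table); // the table's single region, pre-load
  abstract void loadRows(HTable table);              // enough content to force a split
  abstract void waitForSplit(HTable table);

  void run(HTable table) {
    // Record the parent while the table still has exactly one region,
    // so later assertions name the daughters of the right parent.
    HRegionInfo parent = findOnlyRegion(table);
    loadRows(table);
    waitForSplit(table);
    // ... assert here that the daughter regions derive from `parent` ...
  }
}
{code}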



[jira] Commented: (HADOOP-2083) [hbase] TestTableIndex failed in patch build #970 and #956

Posted by "stack (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-2083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12539197 ] 

stack commented on HADOOP-2083:
-------------------------------

The #1041 failure in TestTableIndex was because an assertion that the table was multi-region failed.  The same thing caused the other test failure, in TestTableMapReduce in patch build #1040.  The code that finds the parent is flawed in that it doesn't allow for the parent region having already split by the time we go to provoke the split.  The #1038 failure shows the same issue.
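
A defensive version of the lookup might read like the sketch below (the helpers are invented placeholders, not the committed fix): rather than assume the region found first is still a live parent, re-check that it hasn't already split.

{code}
// Hypothetical sketch; scanMetaFor() and isLiveAndUnsplit() are invented.
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.HTable;

abstract class ParentLookupSketch {
  abstract Iterable<HRegionInfo> scanMetaFor(HTable table); // regions currently in META
  abstract boolean isLiveAndUnsplit(HRegionInfo region);    // not offline, not already split

  HRegionInfo pickLiveParent(HTable table) {
    for (HRegionInfo region : scanMetaFor(table)) {
      if (isLiveAndUnsplit(region)) {
        return region;   // still safe to provoke a split on this one
      }
    }
    return null;         // everything already split; caller must rescan
  }
}
{code}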



[jira] Updated: (HADOOP-2083) [hbase] TestTableIndex failed in patch build #970 and #956

Posted by "stack (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-2083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HADOOP-2083:
--------------------------

    Attachment: mrt-v2.patch

v2 passes all tests locally.



[jira] Commented: (HADOOP-2083) [hbase] TestTableIndex failed in patch build #970 and #956

Posted by "Hadoop QA (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-2083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12539297 ] 

Hadoop QA commented on HADOOP-2083:
-----------------------------------

+1 overall.  Here are the results of testing the latest attachment 
http://issues.apache.org/jira/secure/attachment/12368803/mrt-v2.patch
against trunk revision r590875.

    @author +1.  The patch does not contain any @author tags.

    javadoc +1.  The javadoc tool did not generate any warning messages.

    javac +1.  The applied patch does not generate any new compiler warnings.

    findbugs +1.  The patch does not introduce any new Findbugs warnings.

    core tests +1.  The patch passed core unit tests.

    contrib tests +1.  The patch passed contrib unit tests.

Test results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1047/testReport/
Findbugs warnings: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1047/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1047/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/1047/console

This message is automatically generated.



[jira] Commented: (HADOOP-2083) [hbase] TestTableIndex failed in patch build #970 and #956

Posted by "Hudson (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-2083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12539667 ] 

Hudson commented on HADOOP-2083:
--------------------------------

Integrated in Hadoop-Nightly #290 (See [http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Nightly/290/])
