Posted to issues@hbase.apache.org by "stack (JIRA)" <ji...@apache.org> on 2018/02/21 06:03:02 UTC

[jira] [Created] (HBASE-20036) TestAvoidCellReferencesIntoShippedBlocks timed out

stack created HBASE-20036:
-----------------------------

             Summary: TestAvoidCellReferencesIntoShippedBlocks timed out
                 Key: HBASE-20036
                 URL: https://issues.apache.org/jira/browse/HBASE-20036
             Project: HBase
          Issue Type: Sub-task
            Reporter: stack


Looks like it is stuck and can't flush; bad math?

See the dashboard where it hung here: https://builds.apache.org/job/HBASE-Flaky-Tests-branch2.0/2428/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.client.TestAvoidCellReferencesIntoShippedBlocks-output.txt



... the provocation could be this?

2018-02-21 04:18:44,973 DEBUG [Thread-178] bucket.BucketCache(629): This block eefaa9a7b10e437b9fc2b55a67d63191_4356 is still referred by 1 readers. Can not be freed now. Hence will mark this for evicting at a later point
Exception in thread "Thread-178" java.lang.AssertionError: old blocks should still be found  expected:<6> but was:<5>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:834)
	at org.junit.Assert.assertEquals(Assert.java:645)
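
For context, the eviction message above is the reference-counted deferred-eviction path: a block that still has an active reader is only marked for eviction and actually freed once the reader releases it. Below is a toy Java model of that behaviour (not the actual BucketCache or test code; the class and method names are made up for illustration) showing how a count of cached blocks can come up one short of what a test expects while a block sits in the marked-but-not-yet-freed state.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class DeferredEvictionSketch {
  static final class Block {
    final AtomicInteger refCount = new AtomicInteger();
    volatile boolean markedForEviction = false;
  }

  private final Map<String, Block> cache = new ConcurrentHashMap<>();

  void cacheBlock(String key) { cache.put(key, new Block()); }

  /** A reader pins the block for the lifetime of its scan. */
  void retain(String key) {
    Block b = cache.get(key);
    if (b != null) b.refCount.incrementAndGet();
  }

  /** Reader done; complete a deferred eviction if one is pending. */
  void release(String key) {
    Block b = cache.get(key);
    if (b != null && b.refCount.decrementAndGet() == 0 && b.markedForEviction) {
      cache.remove(key);
    }
  }

  /** Evict now if unreferenced, otherwise only mark for later ("Can not be freed now"). */
  void evict(String key) {
    Block b = cache.get(key);
    if (b == null) return;
    if (b.refCount.get() > 0) {
      b.markedForEviction = true;
    } else {
      cache.remove(key);
    }
  }

  /** Count blocks not yet marked for eviction. */
  long visibleBlockCount() {
    return cache.values().stream().filter(b -> !b.markedForEviction).count();
  }

  public static void main(String[] args) {
    DeferredEvictionSketch cache = new DeferredEvictionSketch();
    for (int i = 0; i < 6; i++) cache.cacheBlock("block-" + i);
    cache.retain("block-5");   // a scanner still references this block
    cache.evict("block-5");    // eviction is deferred, block is only marked
    // A check that skips marked blocks now sees 5 where 6 were expected.
    System.out.println("visible blocks: " + cache.visibleBlockCount());
    cache.release("block-5");  // the deferred eviction completes here
  }
}

If the test's block counting races with the scanner releasing that block in this way, the expected:<6> but was:<5> off-by-one would be timing-dependent, which would fit a flaky failure.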

... Then we get stuck doing this:

2018-02-21 04:23:34,661 DEBUG [master/asf903:0.Chore.1] master.HMaster(1524): Skipping normalization for table: testHBase16372InCompactionWritePath, as it's either system table or doesn't have auto normalization turned on
2018-02-21 04:23:35,695 INFO  [regionserver/asf903:0.Chore.1] regionserver.HRegionServer$PeriodicMemStoreFlusher(1752): MemstoreFlusherChore requesting flush of hbase:meta,,1.1588230740 because info has an old edit so flush to free WALs after random delay 286820ms
2018-02-21 04:23:36,009 DEBUG [ReadOnlyZKClient-localhost:61855@0x227ea3ed] zookeeper.ReadOnlyZKClient(316): 0x227ea3ed to localhost:61855 inactive for 60000ms; closing (Will reconnect when new requests)
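
The 286820ms in the flusher line above is the random delay the chore attached to the hbase:meta flush request. Here is a minimal, illustrative sketch of that kind of jittered scheduling; the 5-minute bound and the class/method names are assumptions for the example, not HBase's actual PeriodicMemStoreFlusher. It only shows why a flush requested "after random delay 286820ms" can make a test with a shorter timeout look hung.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

public class RandomDelayFlushSketch {
  // Assumed upper bound for the jitter; HBase's real bound may differ.
  static final long MAX_RANDOM_DELAY_MS = TimeUnit.MINUTES.toMillis(5);

  private final ScheduledExecutorService chore =
      Executors.newSingleThreadScheduledExecutor();

  void requestDelayedFlush(String regionName, Runnable flush) {
    // Spread flushes out so every region with old edits doesn't flush at once.
    long delayMs = ThreadLocalRandom.current().nextLong(MAX_RANDOM_DELAY_MS);
    System.out.println("requesting flush of " + regionName
        + " to free WALs after random delay " + delayMs + "ms");
    chore.schedule(flush, delayMs, TimeUnit.MILLISECONDS);
  }

  public static void main(String[] args) throws InterruptedException {
    RandomDelayFlushSketch flusher = new RandomDelayFlushSketch();
    // A delay near the upper bound (e.g. ~286s) is perfectly legal here, so a
    // test waiting on the flush with a shorter timeout appears to be stuck.
    flusher.requestDelayedFlush("hbase:meta,,1.1588230740",
        () -> System.out.println("flushed"));
    TimeUnit.MILLISECONDS.sleep(200);
    flusher.chore.shutdownNow();
  }
}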

It also failed a recent nightly for the same reason:

https://builds.apache.org/job/HBase%20Nightly/job/branch-2/355/

Any chance you'd take a look, [~ram_krish]? You're the best at this stuff?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)