Posted to user@phoenix.apache.org by Alexander Batyrshin <0x...@gmail.com> on 2019/08/15 17:19:36 UTC

Re: java.io.IOException: Added a key not lexically larger than previous

Is it possible that Phoenix is the reason for this problem?

> On 20 Jun 2019, at 04:16, Alexander Batyrshin <0x...@gmail.com> wrote:
> 
> Hello,
> Are there any ideas where this problem comes from and how to fix?
> 
> Jun 18 21:38:05 prod022 hbase[148581]: 2019-06-18 21:38:05,348 WARN  [MemStoreFlusher.0] regionserver.HStore: Failed flushing store file, retrying num=9
> Jun 18 21:38:05 prod022 hbase[148581]: java.io.IOException: Added a key not lexically larger than previous. Current cell = \x0D100395583733fW+,WQ/d:p/1560882798036/DeleteColumn/vlen=0/seqid=30023231, lastCell = \x0D100395583733fW+,WQ/d:p/1560882798036/Put/vlen=29/seqid=30023591
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.io.hfile.AbstractHFileWriter.checkKey(AbstractHFileWriter.java:204)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.append(HFileWriterV2.java:279)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.io.hfile.HFileWriterV3.append(HFileWriterV3.java:87)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.append(StoreFile.java:1053)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.StoreFlusher.performFlush(StoreFlusher.java:139)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:75)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:969)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2484)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2622)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2352)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2314)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2200)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2125)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:512)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:482)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> Jun 18 21:38:05 prod022 hbase[148581]:         at java.lang.Thread.run(Thread.java:748)
> Jun 18 21:38:05 prod022 hbase[148581]: 2019-06-18 21:38:05,373 FATAL [MemStoreFlusher.0] regionserver.HRegionServer: ABORTING region server prod022,60020,1560521871613: Replay of WAL required. Forcing server shutdown
> Jun 18 21:38:05 prod022 hbase[148581]: org.apache.hadoop.hbase.DroppedSnapshotException: region: TBL_C,\x0D04606203096428+jaVbx.,1558885224779.b4633aee06956663b05e8322ce34b0a3.
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2675)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2352)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:2314)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:2200)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HRegion.flush(HRegion.java:2125)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:512)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:482)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$900(MemStoreFlusher.java:76)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:264)
> Jun 18 21:38:05 prod022 hbase[148581]:         at java.lang.Thread.run(Thread.java:748)
> Jun 18 21:38:05 prod022 hbase[148581]: Caused by: java.io.IOException: Added a key not lexically larger than previous. Current cell = \x0D100395583733fW+,WQ/d:p/1560882798036/DeleteColumn/vlen=0/seqid=30023231, lastCell = \x0D100395583733fW+,WQ/d:p/1560882798036/Put/vlen=29/seqid=30023591
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.io.hfile.AbstractHFileWriter.checkKey(AbstractHFileWriter.java:204)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.append(HFileWriterV2.java:279)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.io.hfile.HFileWriterV3.append(HFileWriterV3.java:87)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.append(StoreFile.java:1053)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.StoreFlusher.performFlush(StoreFlusher.java:139)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:75)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:969)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2484)
> Jun 18 21:38:05 prod022 hbase[148581]:         at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushCacheAndCommit(HRegion.java:2622)
> Jun 18 21:38:05 prod022 hbase[148581]:         ... 9 more
> Jun 18 21:38:05 prod022 hbase[148581]: 2019-06-18 21:38:05,373 FATAL [MemStoreFlusher.0] regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: [org.apache.phoenix.coprocessor.ScanRegionObserver...
> 


Re: java.io.IOException: Added a key not lexically larger than previous

Posted by Alexander Batyrshin <0x...@gmail.com>.
I'm using a global index.
HBase-1.4.10
Phoenix-4.14.2

I constantly get this issue today after increasing the write load.

> On 15 Aug 2019, at 21:27, Josh Elser <el...@apache.org> wrote:
> 
> Are you using a local index? Can you share the basics, please (HBase and Phoenix versions)?
> 
> I can't tell whether you've shared this previously on this or another thread. Sorry if you have.
> 
> Short answer: it's possible that something around secondary indexing in Phoenix causes this, but it's not possible to say definitively in a vacuum.
> 
> On 8/15/19 1:19 PM, Alexander Batyrshin wrote:
>> Is it possible that Phoenix is the reason for this problem?
>>> On 20 Jun 2019, at 04:16, Alexander Batyrshin <0x...@gmail.com> wrote:
>>> 
>>> Hello,
>>> Are there any ideas where this problem comes from and how to fix?
>>> 
>>> [stack trace identical to the one quoted above; snipped]
>>> 


Re: java.io.IOException: Added a key not lexically larger than previous

Posted by Alexander Batyrshin <0x...@gmail.com>.
> Since you're using a global index, which stores the index data in a separate table (and hence different regions, each of which has a different MemStore / set of HFiles), and the error's happening to the base table, I'd be surprised if Phoenix indexing is related. 

AFAIK Phoenix handles mutable global secondary indexes server-side via a coprocessor on HBase puts. So maybe this coprocessor somehow breaks MemStore consistency, for example by releasing locks, etc.
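
For context, below is a minimal, self-contained sketch of where such a server-side hook sits in the write path. It is not Phoenix's actual Indexer; the class name and the logging body are made up for illustration, and it assumes the HBase 1.x coprocessor API.

import java.io.IOException;

import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;

// Toy observer (NOT Phoenix's Indexer): it runs on the region server for
// every Put, before the edit reaches the MemStore, which is where an
// index-maintenance hook would derive and route index mutations.
public class LoggingWriteObserver extends BaseRegionObserver {
    @Override
    public void prePut(ObserverContext<RegionCoprocessorEnvironment> ctx,
                       Put put, WALEdit edit, Durability durability) throws IOException {
        String table = ctx.getEnvironment().getRegion().getRegionInfo()
                .getTable().getNameAsString();
        // A real indexer would build index-table mutations from 'put' here.
        System.out.println("prePut on " + table + ": " + put.size() + " cell(s)");
    }
}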


> What coprocessors (Phoenix and otherwise) are loaded on the table?


TABLE_ATTRIBUTES => {
    MAX_FILESIZE => '32212254720',
    coprocessor$1 => '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|',
    coprocessor$2 => '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
    coprocessor$3 => '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|',
    coprocessor$4 => '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|',
    coprocessor$5 => '|org.apache.phoenix.hbase.index.Indexer|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder'
}
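
For what it's worth, the same list can be read programmatically with the HBase 1.x Java client; a minimal sketch, assuming an hbase-site.xml on the classpath and the table name TBL_C from the log above:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ListCoprocessors {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create(); // picks up hbase-site.xml
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Admin admin = connection.getAdmin()) {
            HTableDescriptor desc = admin.getTableDescriptor(TableName.valueOf("TBL_C"));
            // Coprocessor class names attached to the table (the Phoenix observers above).
            for (String coprocessor : desc.getCoprocessors()) {
                System.out.println(coprocessor);
            }
        }
    }
}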


Re: java.io.IOException: Added a key not lexically larger than previous

Posted by Geoffrey Jacoby <gj...@salesforce.com>.
Alexander,

I can tell you what's happening but I don't know why.

When you do a Put in HBase (which is what Phoenix UPSERTs do underneath),
it gets committed to a mutable in-memory buffer called the MemStore.
Periodically, the MemStore is flushed to a physical HDFS file called an
HFile. The rule for HFiles is that all of the data inside them is sorted by
cell key. That means that the code that flushes the MemStore to an HFile has
a built-in sanity check which makes sure that each Cell written to the HFile
is monotonically increasing. That sanity check is failing here for some
reason.
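
To make that check concrete, here is a minimal, self-contained sketch of the
ordering rule as I understand it (row, family and qualifier ascending, then
timestamp and type byte descending). CellKey, KEY_ORDER and checkKey are
made-up names for illustration, not the real HBase classes; the real check
lives in AbstractHFileWriter.checkKey and the KeyValue comparator.

import java.io.IOException;
import java.util.Comparator;

public class FlushOrderSketch {

    // Simplified stand-in for the key portion of an HBase Cell.
    static final class CellKey {
        final byte[] row, family, qualifier;
        final long timestamp;
        final int typeByte; // KeyValue.Type codes, e.g. Put = 4, DeleteColumn = 12

        CellKey(byte[] row, byte[] family, byte[] qualifier, long timestamp, int typeByte) {
            this.row = row; this.family = family; this.qualifier = qualifier;
            this.timestamp = timestamp; this.typeByte = typeByte;
        }
    }

    // Unsigned lexicographic byte[] comparison, like Bytes.compareTo.
    static int compareBytes(byte[] a, byte[] b) {
        int len = Math.min(a.length, b.length);
        for (int i = 0; i < len; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    // Row asc, family asc, qualifier asc, then timestamp desc, then type byte desc.
    static final Comparator<CellKey> KEY_ORDER = (l, r) -> {
        int c = compareBytes(l.row, r.row);
        if (c == 0) c = compareBytes(l.family, r.family);
        if (c == 0) c = compareBytes(l.qualifier, r.qualifier);
        if (c == 0) c = Long.compare(r.timestamp, l.timestamp); // newer cells sort first
        if (c == 0) c = r.typeByte - l.typeByte;                // delete markers sort before puts
        return c;
    };

    // The flush-time sanity check: reject any key that sorts before the previous one.
    static void checkKey(CellKey last, CellKey current) throws IOException {
        if (last != null && KEY_ORDER.compare(last, current) > 0) {
            throw new IOException("Added a key not lexically larger than previous");
        }
    }
}

If that rule is right, the DeleteColumn marker (type 12) in the log above
should sort before the Put (type 4) that shares its row, column and
timestamp, so seeing the Put appended first is exactly the kind of inversion
this check rejects.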

You mention that this happens constantly with increased write load --
that's likely because more write load causes more MemStore flushes.

Since you're using a global index, which stores the index data in a
separate table (and hence different regions, each of which has a different
MemStore / set of HFiles), and the error's happening to the base table, I'd
be surprised if Phoenix indexing is related.

What coprocessors (Phoenix and otherwise) are loaded on the table?

Geoffrey

On Thu, Aug 15, 2019 at 12:49 PM Alexander Batyrshin <0x...@gmail.com>
wrote:

>
> > On 15 Aug 2019, at 21:27, Josh Elser <el...@apache.org> wrote:
> >
> > Short answer: it's possible that something around secondary indexing in
> > Phoenix causes this, but it's not possible to say definitively in a vacuum.
>
>
> As far as I can see, the region server crashes on the main table (not the
> index) MemStore flush.
> How can I help provide more information?
> Maybe I should increase the logging level for some specific Java class of
> Phoenix or HBase?

Re: java.io.IOException: Added a key not lexically larger than previous

Posted by Alexander Batyrshin <0x...@gmail.com>.
> On 15 Aug 2019, at 21:27, Josh Elser <el...@apache.org> wrote:
> 
> Short answer: it's possible that something around secondary indexing in Phoenix causes this, but it's not possible to say definitively in a vacuum.


As far as I can see, the region server crashes on the main table (not the index) MemStore flush.
How can I help provide more information?
Maybe I should increase the logging level for some specific Java class of Phoenix or HBase?

Re: java.io.IOException: Added a key not lexically larger than previous

Posted by Josh Elser <el...@apache.org>.
Are you using a local index? Can you share the basics, please (HBase and
Phoenix versions)?

I can't tell whether you've shared this previously on this or another
thread. Sorry if you have.

Short answer: it's possible that something around secondary indexing in
Phoenix causes this, but it's not possible to say definitively in a vacuum.

On 8/15/19 1:19 PM, Alexander Batyrshin wrote:
> Is it possible that Phoenix is the reason for this problem?
> 
>> On 20 Jun 2019, at 04:16, Alexander Batyrshin <0x...@gmail.com> wrote:
>>
>> Hello,
>> Are there any ideas where this problem comes from and how to fix?
>>
>> [stack trace identical to the one quoted above; snipped]
>>
>