Posted to dev@hbase.apache.org by Stack <st...@duboce.net> on 2014/12/01 00:52:38 UTC

What is our current understanding regards state of integration tests?

I just tried running all tests and notice that at least
IntegrationTestIngestWithEncryption fails reliably with the below. Is this
expected? Does this test require a particular context setup first to run,
one that is not present normally? Once it has run, I cannot get past it.
If I restart servers, they fail on log replay with same exception. Is this
expected?

On IntegrationTests in general, I was thinking they should generally pass
and that it is a bug if they do not.  Is that what others think?

Has anyone been running IT tests regularly? If so, what has been your
experience? Or if you have been running IT tests, do you run individual
tests?

Thanks. Just trying to figure out what the current understanding of their state is
before digging in.
Yours,
St.Ack

2014-11-29 21:19:06,447 DEBUG
[B.defaultRpcServer.handler=14,queue=2,port=16020] regionserver.HRegion:
Flush requested on
IntegrationTestIngestWithEncryption,cccccccc,1417324570806.325074071473eb28b662f7d694e8c609.
2014-11-29 21:19:06,640 FATAL [MemStoreFlusher.1]
regionserver.HRegionServer: ABORTING region server
c2021.halxg.cloudera.com,16020,1417323325816:
Replay of HLog required. Forcing server shutdown
org.apache.hadoop.hbase.DroppedSnapshotException: region:
IntegrationTestIngestWithEncryption,,1417324570806.55e317f49ddf8ed35323d9b675611548.
        at
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1964)
        at
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1730)
        at
org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:1662)
        at
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:434)
        at
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:407)
        at
org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:69)
        at
org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:225)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.lang.RuntimeException:
KeyProvider scheme should specify KeyStore type
        at
org.apache.hadoop.hbase.io.crypto.Encryption.getKeyProvider(Encryption.java:535)
        at
org.apache.hadoop.hbase.io.crypto.Encryption.getSecretKeyForSubject(Encryption.java:425)
        at
org.apache.hadoop.hbase.io.crypto.Encryption.encryptWithSubjectKey(Encryption.java:449)
        at
org.apache.hadoop.hbase.security.EncryptionUtil.wrapKey(EncryptionUtil.java:92)
        at
org.apache.hadoop.hbase.io.hfile.HFileWriterV3.finishClose(HFileWriterV3.java:127)
        at
org.apache.hadoop.hbase.io.hfile.HFileWriterV2.close(HFileWriterV2.java:366)
        at
org.apache.hadoop.hbase.regionserver.StoreFile$Writer.close(StoreFile.java:986)
        at
org.apache.hadoop.hbase.regionserver.StoreFlusher.finalizeWriter(StoreFlusher.java:67)
        at
org.apache.hadoop.hbase.regionserver.DefaultStoreFlusher.flushSnapshot(DefaultStoreFlusher.java:80)
        at
org.apache.hadoop.hbase.regionserver.HStore.flushCache(HStore.java:871)
        at
org.apache.hadoop.hbase.regionserver.HStore$StoreFlusherImpl.flushCache(HStore.java:2080)
        at
org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:1918)
        ... 7 more
Caused by: java.lang.RuntimeException: KeyProvider scheme should specify
KeyStore type
        at
org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider.init(KeyStoreKeyProvider.java:142)
        at
org.apache.hadoop.hbase.io.crypto.Encryption.getKeyProvider(Encryption.java:528)
        ... 18 more

Re: What is our current understanding regards state of integration tests?

Posted by Michael Segel <mi...@hotmail.com>.
Can you do a check prior to launching the encryption test to see if the cluster has encryption enabled? 

If not, skip the test? 
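
Something along these lines in the test's setup might do it. This is only a
rough sketch; the hbase.crypto.* property names are what the key provider
appears to read from the site configuration, and the Assume-based skip is an
assumption about how we'd want the harness to behave, not what the test
currently does:

    import org.apache.hadoop.conf.Configuration;
    import org.junit.Assume;

    public class EncryptionPrecondition {
      /** Skip (rather than fail) the calling test if the cluster has no
       *  encryption key provider configured. */
      public static void assumeEncryptionConfigured(Configuration conf) {
        String provider = conf.get("hbase.crypto.keyprovider");
        String params = conf.get("hbase.crypto.keyprovider.parameters");
        Assume.assumeTrue("Skipping: hbase.crypto.keyprovider is not set",
            provider != null && !provider.isEmpty());
        Assume.assumeTrue("Skipping: no keystore URI in "
            + "hbase.crypto.keyprovider.parameters",
            params != null && !params.isEmpty());
      }
    }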

On Dec 1, 2014, at 3:12 PM, Andrew Purtell <an...@gmail.com> wrote:

>> My cluster won't start now after this IT test runs.  That is to be expected?
> 
> Yes, but it is ugly I agree. We can check the deploy-ability of encryption like we do with compression. Let me open an issue for doing that. 
> 
> 
>> On Dec 1, 2014, at 2:29 AM, Stack <st...@duboce.net> wrote:
>> 
>>> On Sun, Nov 30, 2014 at 7:54 PM, Andrew Purtell <ap...@apache.org> wrote:
>>> 
>>> Yes, you have to set up cluster configuration for encryption or the IT
>>> won't work.
>> 
>> 
>> Ok. Thanks. I'll add a note to the IT section that a wildcard run will include at least
>> this test, which requires encryption setup.
>> 
>> My cluster won't start now after this IT test runs.  That is to be expected?
>> 
>> 
>> 
>> 
>>> Encryption requires creating a keystore, etc.  If you run the
>>> test as all-localhost, using mvn verify, it should pass, because all
>>> configuration can be programmatically set up and deployed since the
>>> environment is a mini cluster. You can see how the test sets up the
>>> necessary configuration using HTU for that case.
>> Thanks. Will add a note on the above and pointers to the security section of the
>> refguide (as per your note after this one).
>> 
>> St.Ack
>> 
>> 
>> 
> 


Re: What is our current understanding regards state of integration tests?

Posted by Stack <st...@duboce.net>.
On Mon, Dec 1, 2014 at 7:12 AM, Andrew Purtell <an...@gmail.com>
wrote:

> > My cluster won't start now after this IT test runs.  That is to be
> expected?
>
> Yes, but it is ugly I agree. We can check the deploy-ability of encryption
> like we do with compression. Let me open an issue for doing that.
>


Thanks Andrew. Let me try and work on it. As per Michael, the test should just
complain that it is missing its config and we move on to the next test.
St.Ack

Re: What is our current understanding regards state of integration tests?

Posted by Andrew Purtell <an...@gmail.com>.
> My cluster won't start now after this IT test runs.  That is to be expected?

Yes, but it is ugly I agree. We can check the deploy-ability of encryption like we do with compression. Let me open an issue for doing that. 
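
Roughly, such a check could look like the sketch below: load the keystore
named by hbase.crypto.keyprovider.parameters with the plain JDK KeyStore API
and fail fast if it doesn't open. The jceks:///path?password=... URI layout
here is inferred from the KeyStoreKeyProvider error above, so treat it as
illustrative rather than the actual utility:

    import java.io.FileInputStream;
    import java.net.URI;
    import java.security.KeyStore;

    public class EncryptionDeployCheck {
      public static void main(String[] args) throws Exception {
        // e.g. jceks:///etc/hbase/conf/hbase.jceks?password=secret
        URI uri = new URI(args[0]);
        String type = uri.getScheme();   // "jceks" or "jks" names the KeyStore type
        if (type == null) {
          throw new IllegalArgumentException(
              "KeyProvider scheme should specify KeyStore type");
        }
        String password = "";
        if (uri.getQuery() != null && uri.getQuery().startsWith("password=")) {
          password = uri.getQuery().substring("password=".length());
        }
        KeyStore ks = KeyStore.getInstance(type.toUpperCase());
        try (FileInputStream in = new FileInputStream(uri.getPath())) {
          ks.load(in, password.toCharArray());
        }
        System.out.println("Keystore loaded OK, " + ks.size() + " entries");
      }
    }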


> On Dec 1, 2014, at 2:29 AM, Stack <st...@duboce.net> wrote:
> 
>> On Sun, Nov 30, 2014 at 7:54 PM, Andrew Purtell <ap...@apache.org> wrote:
>> 
>> Yes, you have to set up cluster configuration for encryption or the IT
>> won't work.
> 
> 
> Ok. Thanks. I'll add a note to the IT section that a wildcard run will include at least
> this test, which requires encryption setup.
> 
> My cluster won't start now after this IT test runs.  That is to be expected?
> 
> 
> 
> 
>> Encryption requires creating a keystore, etc.  If you run the
>> test as all-localhost, using mvn verify, it should pass, because all
>> configuration can be programmatically set up and deployed since the
>> environment is a mini cluster. You can see how the test sets up the
>> necessary configuration using HTU for that case.
> Thanks. Will add a note on the above and pointers to the security section of the
> refguide (as per your note after this one).
> 
> St.Ack
> 
> 
> 

Re: What is our current understanding regards state of integration tests?

Posted by Stack <st...@duboce.net>.
On Sun, Nov 30, 2014 at 7:54 PM, Andrew Purtell <ap...@apache.org> wrote:

> Yes, you have to set up cluster configuration for encryption or the IT
> won't work.


Ok. Thanks. I'll add a note to the IT section that a wildcard run will include at least
this test, which requires encryption setup.

My cluster won't start now after this IT test runs.  That is to be expected?




> Encryption requires creating a keystore, etc.  If you run the
> test as all-localhost, using mvn verify, it should pass, because all
> configuration can be programmatically set up and deployed since the
> environment is a mini cluster. You can see how the test sets up the
> necessary configuration using HTU for that case.
>
>
>
Thanks. Will add a note on the above and pointers to the security section of the
refguide (as per your note after this one).

St.Ack




Re: What is our current understanding regards state of integration tests?

Posted by Andrew Purtell <ap...@apache.org>.
If you want to run IntegrationTestIngestWithEncryption, follow the
instructions in the security section of the manual when setting up the test
cluster first. It's not a bug that some configuration is necessary.
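
The setup is roughly of this shape. Treat it as illustrative only and check
the security section of the refguide for the exact property names, keystore
location and keytool invocation:

    $ keytool -keystore /etc/hbase/conf/hbase.jceks -storetype jceks \
        -genseckey -keyalg AES -keysize 128 -alias hbase

    <!-- hbase-site.xml on every master and region server -->
    <property>
      <name>hbase.crypto.keyprovider</name>
      <value>org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider</value>
    </property>
    <property>
      <name>hbase.crypto.keyprovider.parameters</name>
      <value>jceks:///etc/hbase/conf/hbase.jceks?password=STORE_PASSWORD</value>
    </property>
    <property>
      <name>hbase.crypto.master.key.name</name>
      <value>hbase</value>
    </property>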

hbase-it doesn't have much deployment tooling. We ssh around here and there
via ClusterManager. We could do more. Maybe with Puppet?

On Sun, Nov 30, 2014 at 10:54 PM, Andrew Purtell <ap...@apache.org>
wrote:

> Yes, you have to set up cluster configuration for encryption or the IT
> won't work. Encryption requires creating a keystore, etc.  If you run the
> test as all-localhost, using mvn verify, it should pass, because all
> configuration can be programmatically set up and deployed since the
> environment is a mini cluster. You can see how the test sets up the
> necessary configuration using HTU for that case.
>
>



-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)

Re: What is our current understanding regards state of integration tests?

Posted by Andrew Purtell <ap...@apache.org>.
Yes, you have to set up cluster configuration for encryption or the IT
won't work. Encryption requires creating a keystore, etc.  If you run the
test as all-localhost, using mvn verify, it should pass, because all
configuration can be programmatically set up and deployed since the
environment is a mini cluster. You can see how the test sets up the
necessary configuration using HTU for that case.
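
Roughly, the programmatic setup for the mini cluster case looks like the
sketch below: generate a throwaway keystore, point the configuration at
KeyStoreKeyProvider, then start the mini cluster. The conf keys, key material
and flow here are a loose illustration; the real bootstrap does more than
this, so see the test source for the details:

    import java.io.File;
    import java.io.FileOutputStream;
    import java.security.KeyStore;
    import javax.crypto.spec.SecretKeySpec;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class MiniClusterEncryptionSetup {
      public static void main(String[] args) throws Exception {
        HBaseTestingUtility util = new HBaseTestingUtility();
        Configuration conf = util.getConfiguration();

        // Create a throwaway JCEKS keystore holding a 128-bit AES master key.
        byte[] keyBytes = new byte[16];   // all-zero key, test use only
        KeyStore store = KeyStore.getInstance("JCEKS");
        store.load(null, "password".toCharArray());
        store.setEntry("hbase",
            new KeyStore.SecretKeyEntry(new SecretKeySpec(keyBytes, "AES")),
            new KeyStore.PasswordProtection("password".toCharArray()));
        File keystoreFile = File.createTempFile("hbase-test", ".jceks");
        try (FileOutputStream out = new FileOutputStream(keystoreFile)) {
          store.store(out, "password".toCharArray());
        }

        // Point the cluster at the keystore; a real deployment would put the
        // equivalent properties in hbase-site.xml instead.
        conf.set("hbase.crypto.keyprovider",
            "org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider");
        conf.set("hbase.crypto.keyprovider.parameters",
            "jceks://" + keystoreFile.toURI().getPath() + "?password=password");
        conf.set("hbase.crypto.master.key.name", "hbase");

        util.startMiniCluster(1);
        // ... run the ingest against the mini cluster, then shut down ...
        util.shutdownMiniCluster();
      }
    }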





-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)