Posted to user@hbase.apache.org by Mich Talebzadeh <mi...@gmail.com> on 2018/06/06 18:52:16 UTC

Problem starting region server with Hbase version hbase-2.0.0

Hi,

I have an old Hbase hbase-1.2.3 that runs fine on both RHES 5.6 and RHES 7.5

I created a new Hbase hbase-2.0.0 instance on RHES 7.5.

I seem to have a problem with my region server: it fails to start,
throwing this error:

2018-06-06 19:28:37,033 INFO  [regionserver/rhes75:16020]
regionserver.HRegionServer: CompactionChecker runs every PT10S
2018-06-06 19:28:37,071 INFO  [SplitLogWorker-rhes75:16020]
regionserver.SplitLogWorker: SplitLogWorker rhes75,16020,1528309715572
starting
2018-06-06 19:28:37,073 INFO  [regionserver/rhes75:16020]
regionserver.HeapMemoryManager: Starting, tuneOn=false
2018-06-06 19:28:37,076 INFO  [regionserver/rhes75:16020]
regionserver.ChunkCreator: Allocating data MemStoreChunkPool with chunk
size 2 MB, max count 2880, initial count 0
2018-06-06 19:28:37,077 INFO  [regionserver/rhes75:16020]
regionserver.ChunkCreator: Allocating index MemStoreChunkPool with chunk
size 204.80 KB, max count 3200, initial count 0
2018-06-06 19:28:37,078 INFO  [ReplicationExecutor-0]
regionserver.ReplicationSourceManager: Current list of replicators:
[rhes75,16020,1528309715572] other RSs: [rhes75,16020,1528309715572]
2018-06-06 19:28:37,099 INFO  [regionserver/rhes75:16020]
regionserver.HRegionServer: Serving as rhes75,16020,1528309715572,
RpcServer on rhes75/50.140.197.220:16020, sessionid=0x163d61b308c0033
2018-06-06 19:28:37,100 INFO  [regionserver/rhes75:16020]
quotas.RegionServerRpcQuotaManager: Quota support disabled
2018-06-06 19:28:37,100 INFO  [regionserver/rhes75:16020]
quotas.RegionServerSpaceQuotaManager: Quota support disabled, not starting
space quota manager.
2018-06-06 19:28:40,133 INFO  [regionserver/rhes75:16020]
wal.AbstractFSWAL: WAL configuration: blocksize=256 MB, rollsize=128 MB,
prefix=rhes75%2C16020%2C1528309715572, suffix=,
logDir=hdfs://rhes75:9000/hbase/WALs/rhes75,16020,1528309715572,
archiveDir=hdfs://rhes75:9000/hbase/oldWALs
2018-06-06 19:28:40,251 ERROR [regionserver/rhes75:16020]
regionserver.HRegionServer: ***** ABORTING region server
rhes75,16020,1528309715572: Unhandled: Unable to find
org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec *****

I cannot seem to fix this, even after removing the hbase directory
from HDFS and ZooKeeper! Any ideas would be appreciated.

Thanks

Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.

Re: Problem starting region server with Hbase version hbase-2.0.0

Posted by Josh Elser <el...@apache.org>.
You shouldn't be putting the phoenix-client.jar on the HBase server 
classpath.

There is specifically the phoenix-server.jar which is specifically built 
to be included in HBase (to avoid issues such as these).

Please remove all phoenix-client jars and provide the 
phoenix-5.0.0-server jar instead.
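
Josh's swap can be sketched as a small shell helper. The directory layout and jar names below are assumptions drawn from this thread, not from the Phoenix docs, so adjust them to your install:

```shell
# Hedged sketch of the jar swap described above. HBase puts every lib/*.jar
# on the server classpath, so the client jar must be moved or renamed
# (as Mich did with the .jar_ori suffix), not merely supplemented.
swap_phoenix_jars() {
  local hbase_lib="$1" phoenix_dist="$2" jar
  # Take phoenix client jars off the server classpath, keeping a rollback copy.
  for jar in "$hbase_lib"/phoenix-*-client.jar; do
    if [ -e "$jar" ]; then mv "$jar" "$jar.removed"; fi
  done
  # Install the server-side jar, which is built for inclusion in HBase.
  cp "$phoenix_dist"/phoenix-*-server.jar "$hbase_lib"/
}

# e.g. swap_phoenix_jars /data6/hduser/hbase-2.0.0/lib /path/to/phoenix-5.0.0-dist
```

Renaming rather than deleting keeps a rollback path; a restart of the region servers is still needed for the classpath change to take effect.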

On 6/7/18 5:06 PM, Mich Talebzadeh wrote:
> Thanks.
> 
> under $HBASE_HOME/lib for version 2 I swapped the phoenix client jar file
> as below
> 
> phoenix-5.0.0-alpha-HBase-2.0-client.jar_ori
> phoenix-4.8.1-HBase-1.2-client.jar
> 
> I then started HBASE-2 that worked fine.
> 
> For HBase clients, i.e. the HBase connections from edge nodes etc., I will
> keep using HBase 1.2.6, which is the stable version and connects
> successfully to HBase 2. This appears to be a working solution for now.
> 
> Regards
> 
> Dr Mich Talebzadeh
> 
> 
> 
> On 7 June 2018 at 21:03, Sean Busbey <bu...@apache.org> wrote:
> 
>> Your current problem is caused by this phoenix jar:
>>
>>
>>> hduser@rhes75: /data6/hduser/hbase-2.0.0> find ./ -name '*.jar' -print
>>> -exec jar tf {} \; | grep -E "\.jar$|StreamCapabilities" | grep -B 1
>>> StreamCapabilities
>>> ./lib/phoenix-5.0.0-alpha-HBase-2.0-client.jar
>>> org/apache/hadoop/hbase/util/CommonFSUtils$StreamCapabilities.class
>>> org/apache/hadoop/fs/StreamCapabilities.class
>>> org/apache/hadoop/fs/StreamCapabilities$StreamCapability.class
>>
>> I don't know what version of Hadoop it's bundling or why, but it's one
>> that includes the StreamCapabilities interface, so HBase takes that to
>> mean it can check on capabilities. Since Hadoop 2.7 doesn't claim to
>> implement any, HBase throws its hands up.
>>
>> I'd recommend you ask on the phoenix list how to properly install
>> phoenix such that you don't need to copy the jars into the HBase
>> installation. Hopefully the jar pointed out here is meant to be client
>> facing only and not installed into the HBase cluster.
>>
>>
>> On Thu, Jun 7, 2018 at 2:38 PM, Mich Talebzadeh
>> <mi...@gmail.com> wrote:
>>> Hi,
>>>
>>> Under Hbase Home directory I get
>>>
>>> hduser@rhes75: /data6/hduser/hbase-2.0.0> find ./ -name '*.jar' -print
>>> -exec jar tf {} \; | grep -E "\.jar$|StreamCapabilities" | grep -B 1
>>> StreamCapabilities
>>> ./lib/phoenix-5.0.0-alpha-HBase-2.0-client.jar
>>> org/apache/hadoop/hbase/util/CommonFSUtils$StreamCapabilities.class
>>> org/apache/hadoop/fs/StreamCapabilities.class
>>> org/apache/hadoop/fs/StreamCapabilities$StreamCapability.class
>>> --
>>> ./lib/hbase-common-2.0.0.jar
>>> org/apache/hadoop/hbase/util/CommonFSUtils$StreamCapabilities.class
>>>
>>> for Hadoop home directory I get nothing
>>>
>>> hduser@rhes75: /home/hduser/hadoop-2.7.3> find ./ -name '*.jar' -print
>>> -exec jar tf {} \; | grep -E "\.jar$|StreamCapabilities" | grep -B 1
>>> StreamCapabilities
>>>
>>>
>>> Dr Mich Talebzadeh
>>>
>>>
>>>
>>> On 7 June 2018 at 15:39, Sean Busbey <bu...@apache.org> wrote:
>>>
>>>> Somehow, HBase is getting confused by your installation and thinks it
>>>> can check for whether or not the underlying FileSystem implementation
>>>> (i.e. HDFS) provides hflush/hsync even though that ability is not
>>>> present in Hadoop 2.7. Usually this means there's a mix of Hadoop
>>>> versions on the classpath. While you do have both Hadoop 2.7.3 and
>>>> 2.7.4, that mix shouldn't cause this kind of failure[1].
>>>>
>>>> Please run this command and copy/paste the output in your HBase and
>>>> Hadoop installation directories:
>>>>
>>>> find . -name '*.jar' -print -exec jar tf {} \; | grep -E
>>>> "\.jar$|StreamCapabilities" | grep -B 1 StreamCapabilities
>>>>
>>>>
>>>>
>>>> [1]: As an aside, you should follow the guidance in our reference
>>>> guide from the section "Replace the Hadoop Bundled With HBase!" in the
>>>> Hadoop chapter: http://hbase.apache.org/book.html#hadoop
>>>>
>>>> But as I mentioned, I don't think it's the underlying cause in this
>> case.
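
The "Replace the Hadoop Bundled With HBase!" step Sean references can be sketched roughly as follows. The paths mirror the ones in this thread and the share/hadoop subdirectory list is an assumption, so treat this as a sketch rather than the reference guide's exact procedure:

```shell
# Rough sketch: swap HBase's bundled Hadoop 2.7.4 jars for the jars of the
# running Hadoop 2.7.3 install, so only one Hadoop version is on the
# classpath. Paths and layout are assumptions based on this thread.
replace_bundled_hadoop() {
  local hbase_lib="$1" hadoop_home="$2" sub jar
  mkdir -p "$hbase_lib/bundled-hadoop-backup"
  # Park the bundled jars where HBase will no longer load them.
  for jar in "$hbase_lib"/hadoop-*.jar; do
    if [ -e "$jar" ]; then mv "$jar" "$hbase_lib/bundled-hadoop-backup/"; fi
  done
  # Copy in the jars matching the cluster's actual Hadoop version.
  for sub in common hdfs; do
    for jar in "$hadoop_home/share/hadoop/$sub"/hadoop-*.jar; do
      if [ -e "$jar" ]; then cp "$jar" "$hbase_lib/"; fi
    done
  done
}

# e.g. replace_bundled_hadoop /data6/hduser/hbase-2.0.0/lib /home/hduser/hadoop-2.7.3
```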
>>>>
>>>> On Thu, Jun 7, 2018 at 8:41 AM, Mich Talebzadeh
>>>> <mi...@gmail.com> wrote:
>>>>> Hi,
>>>>>
>>>>> Please find below
>>>>>
>>>>> *bin/hbase version*
>>>>> SLF4J: Class path contains multiple SLF4J bindings.
>>>>> SLF4J: Found binding in
>>>>> [jar:file:/data6/hduser/hbase-2.0.0/lib/phoenix-5.0.0-alpha-
>>>> HBase-2.0-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>> SLF4J: Found binding in
>>>>> [jar:file:/data6/hduser/hbase-2.0.0/lib/slf4j-log4j12-1.7.
>>>> 25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>> SLF4J: Found binding in
>>>>> [jar:file:/home/hduser/hadoop-2.7.3/share/hadoop/common/lib/
>>>> slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>>>>> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>>>>> explanation.
>>>>> HBase 2.0.0
>>>>> Source code repository git://
>>>>> kalashnikov.att.net/Users/stack/checkouts/hbase.git
>>>>> revision=7483b111e4da77adbfc8062b3b22cbe7c2cb91c1
>>>>> Compiled by stack on Sun Apr 22 20:26:55 PDT 2018
>>>>>  From source with checksum a59e806496ef216732e730c746bbe5ac
>>>>>
>>>>> ls -lah lib/hadoop*
>>>>> -rw-r--r-- 1 hduser hadoop  41K Apr 23 04:26
>>>>> lib/hadoop-annotations-2.7.4.jar
>>>>> -rw-r--r-- 1 hduser hadoop  93K Apr 23 04:26 lib/hadoop-auth-2.7.4.jar
>>>>> -rw-r--r-- 1 hduser hadoop  26K Apr 23 04:29
>> lib/hadoop-client-2.7.4.jar
>>>>> -rw-r--r-- 1 hduser hadoop 1.9M Apr 23 04:28
>>>>> lib/hadoop-common-2.7.4-tests.jar
>>>>> -rw-r--r-- 1 hduser hadoop 3.4M Apr 23 04:26
>> lib/hadoop-common-2.7.4.jar
>>>>> -rw-r--r-- 1 hduser hadoop 127K Apr 23 04:29
>> lib/hadoop-distcp-2.7.4.jar
>>>>> -rw-r--r-- 1 hduser hadoop 3.4M Apr 23 04:29
>> lib/hadoop-hdfs-2.7.4-tests.
>>>> jar
>>>>> -rw-r--r-- 1 hduser hadoop 8.0M Apr 23 04:29 lib/hadoop-hdfs-2.7.4.jar
>>>>> -rw-r--r-- 1 hduser hadoop 532K Apr 23 04:29
>>>>> lib/hadoop-mapreduce-client-app-2.7.4.jar
>>>>> -rw-r--r-- 1 hduser hadoop 759K Apr 23 04:29
>>>>> lib/hadoop-mapreduce-client-common-2.7.4.jar
>>>>> -rw-r--r-- 1 hduser hadoop 1.5M Apr 23 04:27
>>>>> lib/hadoop-mapreduce-client-core-2.7.4.jar
>>>>> -rw-r--r-- 1 hduser hadoop 188K Apr 23 04:29
>>>>> lib/hadoop-mapreduce-client-hs-2.7.4.jar
>>>>> -rw-r--r-- 1 hduser hadoop  62K Apr 23 04:29
>>>>> lib/hadoop-mapreduce-client-jobclient-2.7.4.jar
>>>>> -rw-r--r-- 1 hduser hadoop  71K Apr 23 04:28
>>>>> lib/hadoop-mapreduce-client-shuffle-2.7.4.jar
>>>>> -rw-r--r-- 1 hduser hadoop  26K Apr 23 04:28
>>>>> lib/hadoop-minicluster-2.7.4.jar
>>>>> -rw-r--r-- 1 hduser hadoop 2.0M Apr 23 04:27
>>>> lib/hadoop-yarn-api-2.7.4.jar
>>>>> -rw-r--r-- 1 hduser hadoop 163K Apr 23 04:28
>>>>> lib/hadoop-yarn-client-2.7.4.jar
>>>>> -rw-r--r-- 1 hduser hadoop 1.7M Apr 23 04:27
>>>>> lib/hadoop-yarn-common-2.7.4.jar
>>>>> -rw-r--r-- 1 hduser hadoop 216K Apr 23 04:28
>>>>> lib/hadoop-yarn-server-applicationhistoryservice-2.7.4.jar
>>>>> -rw-r--r-- 1 hduser hadoop 380K Apr 23 04:28
>>>>> lib/hadoop-yarn-server-common-2.7.4.jar
>>>>> -rw-r--r-- 1 hduser hadoop 703K Apr 23 04:28
>>>>> lib/hadoop-yarn-server-nodemanager-2.7.4.jar
>>>>> -rw-r--r-- 1 hduser hadoop 1.3M Apr 23 04:29
>>>>> lib/hadoop-yarn-server-resourcemanager-2.7.4.jar
>>>>> -rw-r--r-- 1 hduser hadoop  75K Apr 23 04:28
>>>>> lib/hadoop-yarn-server-tests-2.7.4-tests.jar
>>>>> -rw-r--r-- 1 hduser hadoop  58K Apr 23 04:29
>>>>> lib/hadoop-yarn-server-web-proxy-2.7.4.jar
>>>>>
>>>>> Also I am on Hadoop 2.7.3
>>>>>
>>>>> *hadoop version*
>>>>> Hadoop 2.7.3
>>>>> Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r
>>>>> baa91f7c6bc9cb92be5982de4719c1c8af91ccff
>>>>> Compiled by root on 2016-08-18T01:41Z
>>>>> Compiled with protoc 2.5.0
>>>>>  From source with checksum 2e4ce5f957ea4db193bce3734ff29ff4
>>>>> This command was run using
>>>>> /home/hduser/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar
>>>>>
>>>>>
>>>>> Dr Mich Talebzadeh
>>>>>
>>>>>
>>>>>
>>>>> On 7 June 2018 at 14:20, Sean Busbey <se...@gmail.com> wrote:
>>>>>
>>>>>> HBase needs HDFS syncs to avoid data loss during component failure.
>>>>>>
>>>>>> What's the output of the command "bin/hbase version"?
>>>>>>
>>>>>>
>>>>>> What's the result of doing the following in the hbase install?
>>>>>>
>>>>>> ls -lah lib/hadoop*
>>>>>>
>>>>>> On Jun 7, 2018 00:58, "Mich Talebzadeh" <mi...@gmail.com>
>>>> wrote:
>>>>>>
>>>>>> yes correct I am using Hbase on hdfs  with hadoop-2.7.3
>>>>>>
>>>>>> The file system is ext4.
>>>>>>
>>>>>> I was hoping that I can avoid the sync option,
>>>>>>
>>>>>> many thanks
>>>>>>
>>>>>>
>>>>>>
>>>>>> Dr Mich Talebzadeh
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 7 June 2018 at 01:43, Sean Busbey <bu...@apache.org> wrote:
>>>>>>
>>>>>>> On Wed, Jun 6, 2018 at 6:11 PM, Mich Talebzadeh
>>>>>>> <mi...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> so the region server started OK but then I had a problem with
>>>> master :(
>>>>>>>>
>>>>>>>> java.lang.IllegalStateException: The procedure WAL relies on the
>>>>>>> ability to
>>>>>>>> hsync for proper operation during component failures, but the
>>>>>> underlying
>>>>>>>> filesystem does not support doing so. Please check the config
>> value
>>>> of
>>>>>>>> 'hbase.procedure.store.wal.use.hsync' to set the desired level
>> of
>>>>>>>> robustness and ensure the config value of 'hbase.wal.dir' points
>> to
>>>> a
>>>>>>>> FileSystem mount that can provide it.
>>>>>>>>
>>>>>>>
>>>>>>> This error means that you're running on top of a Filesystem that
>>>>>>> doesn't provide sync.
>>>>>>>
>>>>>>> Are you using HDFS? What version?
>>>>>>>
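
For reference, the setting named in that stack trace lives in hbase-site.xml. The fragment below is a hedged illustration only: as Sean notes, HBase needs real syncs to avoid data loss, and the thread's eventual fix was removing the phoenix client jar, not relaxing this check.

```xml
<!-- Illustrative only: relaxing the hsync requirement trades durability
     for startup, and is NOT the fix this thread ultimately arrived at. -->
<property>
  <name>hbase.procedure.store.wal.use.hsync</name>
  <value>false</value>
</property>
```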
>>>>>>
>>>>>>
>>>>>>
>>>>
>>
> 


Re: Problem starting region server with Hbase version hbase-2.0.0

Posted by Mich Talebzadeh <mi...@gmail.com>.
Thanks.

under $HBASE_HOME/lib for version 2 I swapped the phoenix client jar file
as below

phoenix-5.0.0-alpha-HBase-2.0-client.jar_ori
phoenix-4.8.1-HBase-1.2-client.jar

I then started HBASE-2 that worked fine.

For Hbase clients, i.e. the Hbase  connection from edge nodes etc, I will
keep using HBASE-1.2.6 which is the stable version and it connects
successfully to Hbase-2. This appears to be a working solution for now.

Regards

Dr Mich Talebzadeh



LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
<https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 7 June 2018 at 21:03, Sean Busbey <bu...@apache.org> wrote:

> Your current problem is caused by this phoenix jar:
>
>
> > hduser@rhes75: /data6/hduser/hbase-2.0.0> find ./ -name '*.jar' -print
> > -exec jar tf {} \; | grep -E "\.jar$|StreamCapabilities" | grep -B 1
> > StreamCapabilities
> > ./lib/phoenix-5.0.0-alpha-HBase-2.0-client.jar
> > org/apache/hadoop/hbase/util/CommonFSUtils$StreamCapabilities.class
> > org/apache/hadoop/fs/StreamCapabilities.class
> > org/apache/hadoop/fs/StreamCapabilities$StreamCapability.class
>
> I don't know what version of Hadoop it's bundling or why, but it's one
> that includes the StreamCapabilities interface, so HBase takes that to
> mean it can check on capabilities. Since Hadoop 2.7 doesn't claim to
> implement any, HBase throws its hands up.
>
> I'd recommend you ask on the phoenix list how to properly install
> phoenix such that you don't need to copy the jars into the HBase
> installation. Hopefully the jar pointed out here is meant to be client
> facing only and not installed into the HBase cluster.
>
>
> On Thu, Jun 7, 2018 at 2:38 PM, Mich Talebzadeh
> <mi...@gmail.com> wrote:
> > Hi,
> >
> > Under Hbase Home directory I get
> >
> > hduser@rhes75: /data6/hduser/hbase-2.0.0> find ./ -name '*.jar' -print
> > -exec jar tf {} \; | grep -E "\.jar$|StreamCapabilities" | grep -B 1
> > StreamCapabilities
> > ./lib/phoenix-5.0.0-alpha-HBase-2.0-client.jar
> > org/apache/hadoop/hbase/util/CommonFSUtils$StreamCapabilities.class
> > org/apache/hadoop/fs/StreamCapabilities.class
> > org/apache/hadoop/fs/StreamCapabilities$StreamCapability.class
> > --
> > ./lib/hbase-common-2.0.0.jar
> > org/apache/hadoop/hbase/util/CommonFSUtils$StreamCapabilities.class
> >
> > for Hadoop home directory I get nothing
> >
> > hduser@rhes75: /home/hduser/hadoop-2.7.3> find ./ -name '*.jar' -print
> > -exec jar tf {} \; | grep -E "\.jar$|StreamCapabilities" | grep -B 1
> > StreamCapabilities
> >
> >
> > Dr Mich Talebzadeh
> >
> >
> >
> > LinkedIn * https://www.linkedin.com/profile/view?id=
> AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> > <https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCd
> OABUrV8Pw>*
> >
> >
> >
> > http://talebzadehmich.wordpress.com
> >
> >
> > *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> > loss, damage or destruction of data or any other property which may arise
> > from relying on this email's technical content is explicitly disclaimed.
> > The author will in no case be liable for any monetary damages arising
> from
> > such loss, damage or destruction.
> >
> >
> >
> > On 7 June 2018 at 15:39, Sean Busbey <bu...@apache.org> wrote:
> >
> >> Somehow, HBase is getting confused by your installation and thinks it
> >> can check for wether or not the underlying FileSystem implementation
> >> (i.e. HDFS) provides hflush/hsync even though that ability is not
> >> present in Hadoop 2.7. Usually this means there's a mix of Hadoop
> >> versions on the classpath. While you do have both Hadoop 2.7.3 and
> >> 2.7.4, that mix shouldn't cause this kind of failure[1].
> >>
> >> Please run this command and copy/paste the output in your HBase and
> >> Hadoop installation directories:
> >>
> >> find . -name '*.jar' -print -exec jar tf {} \; | grep -E
> >> "\.jar$|StreamCapabilities" | grep -B 1 StreamCapabilities
> >>
> >>
> >>
> >> [1]: As an aside, you should follow the guidance in our reference
> >> guide from the section "Replace the Hadoop Bundled With HBase!" in the
> >> Hadoop chapter: http://hbase.apache.org/book.html#hadoop
> >>
> >> But as I mentioned, I don't think it's the underlying cause in this
> case.
> >>
> >> On Thu, Jun 7, 2018 at 8:41 AM, Mich Talebzadeh
> >> <mi...@gmail.com> wrote:
> >> > Hi,
> >> >
> >> > Please find below
> >> >
> >> > *bin/hbase version*
> >> > SLF4J: Class path contains multiple SLF4J bindings.
> >> > SLF4J: Found binding in
> >> > [jar:file:/data6/hduser/hbase-2.0.0/lib/phoenix-5.0.0-alpha-
> >> HBase-2.0-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >> > SLF4J: Found binding in
> >> > [jar:file:/data6/hduser/hbase-2.0.0/lib/slf4j-log4j12-1.7.
> >> 25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >> > SLF4J: Found binding in
> >> > [jar:file:/home/hduser/hadoop-2.7.3/share/hadoop/common/lib/
> >> slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> >> > explanation.
> >> > HBase 2.0.0
> >> > Source code repository git://
> >> > kalashnikov.att.net/Users/stack/checkouts/hbase.git
> >> > revision=7483b111e4da77adbfc8062b3b22cbe7c2cb91c1
> >> > Compiled by stack on Sun Apr 22 20:26:55 PDT 2018
> >> > From source with checksum a59e806496ef216732e730c746bbe5ac
> >> >
> >> > ls -lah lib/hadoop*
> >> > -rw-r--r-- 1 hduser hadoop  41K Apr 23 04:26
> >> > lib/hadoop-annotations-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop  93K Apr 23 04:26 lib/hadoop-auth-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop  26K Apr 23 04:29
> lib/hadoop-client-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 1.9M Apr 23 04:28
> >> > lib/hadoop-common-2.7.4-tests.jar
> >> > -rw-r--r-- 1 hduser hadoop 3.4M Apr 23 04:26
> lib/hadoop-common-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 127K Apr 23 04:29
> lib/hadoop-distcp-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 3.4M Apr 23 04:29
> lib/hadoop-hdfs-2.7.4-tests.
> >> jar
> >> > -rw-r--r-- 1 hduser hadoop 8.0M Apr 23 04:29 lib/hadoop-hdfs-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 532K Apr 23 04:29
> >> > lib/hadoop-mapreduce-client-app-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 759K Apr 23 04:29
> >> > lib/hadoop-mapreduce-client-common-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 1.5M Apr 23 04:27
> >> > lib/hadoop-mapreduce-client-core-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 188K Apr 23 04:29
> >> > lib/hadoop-mapreduce-client-hs-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop  62K Apr 23 04:29
> >> > lib/hadoop-mapreduce-client-jobclient-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop  71K Apr 23 04:28
> >> > lib/hadoop-mapreduce-client-shuffle-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop  26K Apr 23 04:28
> >> > lib/hadoop-minicluster-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 2.0M Apr 23 04:27
> >> lib/hadoop-yarn-api-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 163K Apr 23 04:28
> >> > lib/hadoop-yarn-client-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 1.7M Apr 23 04:27
> >> > lib/hadoop-yarn-common-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 216K Apr 23 04:28
> >> > lib/hadoop-yarn-server-applicationhistoryservice-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 380K Apr 23 04:28
> >> > lib/hadoop-yarn-server-common-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 703K Apr 23 04:28
> >> > lib/hadoop-yarn-server-nodemanager-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 1.3M Apr 23 04:29
> >> > lib/hadoop-yarn-server-resourcemanager-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop  75K Apr 23 04:28
> >> > lib/hadoop-yarn-server-tests-2.7.4-tests.jar
> >> > -rw-r--r-- 1 hduser hadoop  58K Apr 23 04:29
> >> > lib/hadoop-yarn-server-web-proxy-2.7.4.jar
> >> >
> >> > Also I am on Hadoop 2.7.3
> >> >
> >> > *hadoop version*
> >> > Hadoop 2.7.3
> >> > Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r
> >> > baa91f7c6bc9cb92be5982de4719c1c8af91ccff
> >> > Compiled by root on 2016-08-18T01:41Z
> >> > Compiled with protoc 2.5.0
> >> > From source with checksum 2e4ce5f957ea4db193bce3734ff29ff4
> >> > This command was run using
> >> > /home/hduser/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar
> >> >
> >> >
> >> >
> >> >
> >> >
> >> > On 7 June 2018 at 14:20, Sean Busbey <se...@gmail.com> wrote:
> >> >
> >> >> HBase needs HDFS syncs to avoid data loss during component failure.
> >> >>
> >> >> What's the output of the command "bin/hbase version"?
> >> >>
> >> >>
> >> >> What's the result of doing the following in the hbase install?
> >> >>
> >> >> ls -lah lib/hadoop*
> >> >>
> >> >> On Jun 7, 2018 00:58, "Mich Talebzadeh" <mi...@gmail.com>
> >> wrote:
> >> >>
> >> >> Yes, correct: I am using HBase on HDFS with hadoop-2.7.3
> >> >>
> >> >> The file system is ext4.
> >> >>
> >> >> I was hoping that I could avoid the sync option.
> >> >>
> >> >> many thanks
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> On 7 June 2018 at 01:43, Sean Busbey <bu...@apache.org> wrote:
> >> >>
> >> >> > On Wed, Jun 6, 2018 at 6:11 PM, Mich Talebzadeh
> >> >> > <mi...@gmail.com> wrote:
> >> >> > >
> >> >> > >
> >> >> > > so the region server started OK but then I had a problem with
> >> master :(
> >> >> > >
> >> >> > > java.lang.IllegalStateException: The procedure WAL relies on the
> >> >> > ability to
> >> >> > > hsync for proper operation during component failures, but the
> >> >> underlying
> >> >> > > filesystem does not support doing so. Please check the config
> value
> >> of
> >> >> > > 'hbase.procedure.store.wal.use.hsync' to set the desired level
> of
> >> >> > > robustness and ensure the config value of 'hbase.wal.dir' points
> to
> >> a
> >> >> > > FileSystem mount that can provide it.
> >> >> > >
> >> >> >
> >> >> > This error means that you're running on top of a Filesystem that
> >> >> > doesn't provide sync.
> >> >> >
> >> >> > Are you using HDFS? What version?
> >> >> >
> >> >>
> >> >>
> >> >>
> >>
>

Re: Problem starting region server with Hbase version hbase-2.0.0

Posted by Mich Talebzadeh <mi...@gmail.com>.
Thanks.

Under $HBASE_HOME/lib for HBase 2 I swapped the Phoenix client jar file as
below:

phoenix-5.0.0-alpha-HBase-2.0-client.jar_ori
phoenix-4.8.1-HBase-1.2-client.jar

I then started HBASE-2 that worked fine.
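To double-check a swap like this, Sean's find/jar/grep scan can be repeated with a small script. This is only a sketch I am adding here (the function name and the example path are mine, not from the thread); it lists which jars under a lib directory bundle Hadoop's StreamCapabilities class, relying on the fact that a jar is a plain zip archive:

```python
import zipfile
from pathlib import Path

def jars_bundling(lib_dir, needle="org/apache/hadoop/fs/StreamCapabilities"):
    """Return names of jars under lib_dir containing a class entry
    whose path starts with `needle` -- the same check as the
    `find ... -exec jar tf {} \\; | grep ...` pipeline in this thread."""
    hits = []
    for jar in sorted(Path(lib_dir).rglob("*.jar")):
        try:
            with zipfile.ZipFile(jar) as zf:
                if any(name.startswith(needle) for name in zf.namelist()):
                    hits.append(jar.name)
        except zipfile.BadZipFile:
            # skip corrupt archives rather than abort the whole scan
            continue
    return hits

# e.g. jars_bundling("/data6/hduser/hbase-2.0.0/lib")
```

Once the offending Phoenix client jar is out of the way, a scan of $HBASE_HOME/lib should report no jar bundling the Hadoop StreamCapabilities class.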

For HBase clients, i.e. the HBase connections from edge nodes etc., I will
keep using HBase 1.2.6, which is the stable version and connects
successfully to HBase 2. This appears to be a working solution for now.

Regards




On 7 June 2018 at 21:03, Sean Busbey <bu...@apache.org> wrote:

> Your current problem is caused by this phoenix jar:
>
>
> > hduser@rhes75: /data6/hduser/hbase-2.0.0> find ./ -name '*.jar' -print
> > -exec jar tf {} \; | grep -E "\.jar$|StreamCapabilities" | grep -B 1
> > StreamCapabilities
> > ./lib/phoenix-5.0.0-alpha-HBase-2.0-client.jar
> > org/apache/hadoop/hbase/util/CommonFSUtils$StreamCapabilities.class
> > org/apache/hadoop/fs/StreamCapabilities.class
> > org/apache/hadoop/fs/StreamCapabilities$StreamCapability.class
>
> I don't know what version of Hadoop it's bundling or why, but it's one
> that includes the StreamCapabilities interface, so HBase takes that to
> mean it can check on capabilities. Since Hadoop 2.7 doesn't claim to
> implement any, HBase throws its hands up.
>
> I'd recommend you ask on the phoenix list how to properly install
> phoenix such that you don't need to copy the jars into the HBase
> installation. Hopefully the jar pointed out here is meant to be client
> facing only and not installed into the HBase cluster.
>
>
> On Thu, Jun 7, 2018 at 2:38 PM, Mich Talebzadeh
> <mi...@gmail.com> wrote:
> > Hi,
> >
> > Under Hbase Home directory I get
> >
> > hduser@rhes75: /data6/hduser/hbase-2.0.0> find ./ -name '*.jar' -print
> > -exec jar tf {} \; | grep -E "\.jar$|StreamCapabilities" | grep -B 1
> > StreamCapabilities
> > ./lib/phoenix-5.0.0-alpha-HBase-2.0-client.jar
> > org/apache/hadoop/hbase/util/CommonFSUtils$StreamCapabilities.class
> > org/apache/hadoop/fs/StreamCapabilities.class
> > org/apache/hadoop/fs/StreamCapabilities$StreamCapability.class
> > --
> > ./lib/hbase-common-2.0.0.jar
> > org/apache/hadoop/hbase/util/CommonFSUtils$StreamCapabilities.class
> >
> > for Hadoop home directory I get nothing
> >
> > hduser@rhes75: /home/hduser/hadoop-2.7.3> find ./ -name '*.jar' -print
> > -exec jar tf {} \; | grep -E "\.jar$|StreamCapabilities" | grep -B 1
> > StreamCapabilities
> >
> >
> >
> >
> >
> > On 7 June 2018 at 15:39, Sean Busbey <bu...@apache.org> wrote:
> >
> >> Somehow, HBase is getting confused by your installation and thinks it
> >> can check whether or not the underlying FileSystem implementation
> >> (i.e. HDFS) provides hflush/hsync even though that ability is not
> >> present in Hadoop 2.7. Usually this means there's a mix of Hadoop
> >> versions on the classpath. While you do have both Hadoop 2.7.3 and
> >> 2.7.4, that mix shouldn't cause this kind of failure[1].
> >>
> >> Please run this command and copy/paste the output in your HBase and
> >> Hadoop installation directories:
> >>
> >> find . -name '*.jar' -print -exec jar tf {} \; | grep -E
> >> "\.jar$|StreamCapabilities" | grep -B 1 StreamCapabilities
> >>
> >>
> >>
> >> [1]: As an aside, you should follow the guidance in our reference
> >> guide from the section "Replace the Hadoop Bundled With HBase!" in the
> >> Hadoop chapter: http://hbase.apache.org/book.html#hadoop
> >>
> >> But as I mentioned, I don't think it's the underlying cause in this
> case.
> >>
> >> On Thu, Jun 7, 2018 at 8:41 AM, Mich Talebzadeh
> >> <mi...@gmail.com> wrote:
> >> > Hi,
> >> >
> >> > Please find below
> >> >
> >> > *bin/hbase version*
> >> > SLF4J: Class path contains multiple SLF4J bindings.
> >> > SLF4J: Found binding in
> >> > [jar:file:/data6/hduser/hbase-2.0.0/lib/phoenix-5.0.0-alpha-
> >> HBase-2.0-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >> > SLF4J: Found binding in
> >> > [jar:file:/data6/hduser/hbase-2.0.0/lib/slf4j-log4j12-1.7.
> >> 25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >> > SLF4J: Found binding in
> >> > [jar:file:/home/hduser/hadoop-2.7.3/share/hadoop/common/lib/
> >> slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> >> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> >> > explanation.
> >> > HBase 2.0.0
> >> > Source code repository git://
> >> > kalashnikov.att.net/Users/stack/checkouts/hbase.git
> >> > revision=7483b111e4da77adbfc8062b3b22cbe7c2cb91c1
> >> > Compiled by stack on Sun Apr 22 20:26:55 PDT 2018
> >> > From source with checksum a59e806496ef216732e730c746bbe5ac
> >> >
> >> > ls -lah lib/hadoop*
> >> > -rw-r--r-- 1 hduser hadoop  41K Apr 23 04:26
> >> > lib/hadoop-annotations-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop  93K Apr 23 04:26 lib/hadoop-auth-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop  26K Apr 23 04:29
> lib/hadoop-client-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 1.9M Apr 23 04:28
> >> > lib/hadoop-common-2.7.4-tests.jar
> >> > -rw-r--r-- 1 hduser hadoop 3.4M Apr 23 04:26
> lib/hadoop-common-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 127K Apr 23 04:29
> lib/hadoop-distcp-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 3.4M Apr 23 04:29
> lib/hadoop-hdfs-2.7.4-tests.
> >> jar
> >> > -rw-r--r-- 1 hduser hadoop 8.0M Apr 23 04:29 lib/hadoop-hdfs-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 532K Apr 23 04:29
> >> > lib/hadoop-mapreduce-client-app-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 759K Apr 23 04:29
> >> > lib/hadoop-mapreduce-client-common-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 1.5M Apr 23 04:27
> >> > lib/hadoop-mapreduce-client-core-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 188K Apr 23 04:29
> >> > lib/hadoop-mapreduce-client-hs-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop  62K Apr 23 04:29
> >> > lib/hadoop-mapreduce-client-jobclient-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop  71K Apr 23 04:28
> >> > lib/hadoop-mapreduce-client-shuffle-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop  26K Apr 23 04:28
> >> > lib/hadoop-minicluster-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 2.0M Apr 23 04:27
> >> lib/hadoop-yarn-api-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 163K Apr 23 04:28
> >> > lib/hadoop-yarn-client-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 1.7M Apr 23 04:27
> >> > lib/hadoop-yarn-common-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 216K Apr 23 04:28
> >> > lib/hadoop-yarn-server-applicationhistoryservice-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 380K Apr 23 04:28
> >> > lib/hadoop-yarn-server-common-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 703K Apr 23 04:28
> >> > lib/hadoop-yarn-server-nodemanager-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop 1.3M Apr 23 04:29
> >> > lib/hadoop-yarn-server-resourcemanager-2.7.4.jar
> >> > -rw-r--r-- 1 hduser hadoop  75K Apr 23 04:28
> >> > lib/hadoop-yarn-server-tests-2.7.4-tests.jar
> >> > -rw-r--r-- 1 hduser hadoop  58K Apr 23 04:29
> >> > lib/hadoop-yarn-server-web-proxy-2.7.4.jar
> >> >
> >> > Also I am on Hadoop 2.7.3
> >> >
> >> > *hadoop version*
> >> > Hadoop 2.7.3
> >> > Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r
> >> > baa91f7c6bc9cb92be5982de4719c1c8af91ccff
> >> > Compiled by root on 2016-08-18T01:41Z
> >> > Compiled with protoc 2.5.0
> >> > From source with checksum 2e4ce5f957ea4db193bce3734ff29ff4
> >> > This command was run using
> >> > /home/hduser/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar
> >> >
> >> >
> >> >
> >> >
> >> >
> >> > On 7 June 2018 at 14:20, Sean Busbey <se...@gmail.com> wrote:
> >> >
> >> >> HBase needs HDFS syncs to avoid data loss during component failure.
> >> >>
> >> >> What's the output of the command "bin/hbase version"?
> >> >>
> >> >>
> >> >> What's the result of doing the following in the hbase install?
> >> >>
> >> >> ls -lah lib/hadoop*
> >> >>
> >> >> On Jun 7, 2018 00:58, "Mich Talebzadeh" <mi...@gmail.com>
> >> wrote:
> >> >>
> >> >> Yes, correct: I am using HBase on HDFS with hadoop-2.7.3
> >> >>
> >> >> The file system is ext4.
> >> >>
> >> >> I was hoping that I could avoid the sync option.
> >> >>
> >> >> many thanks
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> On 7 June 2018 at 01:43, Sean Busbey <bu...@apache.org> wrote:
> >> >>
> >> >> > On Wed, Jun 6, 2018 at 6:11 PM, Mich Talebzadeh
> >> >> > <mi...@gmail.com> wrote:
> >> >> > >
> >> >> > >
> >> >> > > so the region server started OK but then I had a problem with
> >> master :(
> >> >> > >
> >> >> > > java.lang.IllegalStateException: The procedure WAL relies on the
> >> >> > ability to
> >> >> > > hsync for proper operation during component failures, but the
> >> >> underlying
> >> >> > > filesystem does not support doing so. Please check the config
> value
> >> of
> >> >> > > 'hbase.procedure.store.wal.use.hsync' to set the desired level
> of
> >> >> > > robustness and ensure the config value of 'hbase.wal.dir' points
> to
> >> a
> >> >> > > FileSystem mount that can provide it.
> >> >> > >
> >> >> >
> >> >> > This error means that you're running on top of a Filesystem that
> >> >> > doesn't provide sync.
> >> >> >
> >> >> > Are you using HDFS? What version?
> >> >> >
> >> >>
> >> >>
> >> >>
> >>
>

Re: Problem starting region server with Hbase version hbase-2.0.0

Posted by Sean Busbey <bu...@apache.org>.
Your current problem is caused by this phoenix jar:


> hduser@rhes75: /data6/hduser/hbase-2.0.0> find ./ -name '*.jar' -print
> -exec jar tf {} \; | grep -E "\.jar$|StreamCapabilities" | grep -B 1
> StreamCapabilities
> ./lib/phoenix-5.0.0-alpha-HBase-2.0-client.jar
> org/apache/hadoop/hbase/util/CommonFSUtils$StreamCapabilities.class
> org/apache/hadoop/fs/StreamCapabilities.class
> org/apache/hadoop/fs/StreamCapabilities$StreamCapability.class

I don't know what version of Hadoop it's bundling or why, but it's one
that includes the StreamCapabilities interface, so HBase takes that to
mean it can check on capabilities. Since Hadoop 2.7 doesn't claim to
implement any, HBase throws its hands up.
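If cleaning the classpath is not immediately possible, the enforcement described above can in principle be relaxed in hbase-site.xml. This is a hedged sketch only, not advice given in this thread: `hbase.unsafe.stream.capability.enforce` is the HBase 2.x switch commonly used for the hflush/hsync capability check (verify it exists in your exact release), and disabling it trades away the durability guarantee that the master's IllegalStateException is protecting.

```xml
<!-- hbase-site.xml: relax the hflush/hsync capability check.
     NOT recommended for production; hsync protects the WAL
     against data loss on component failure. -->
<property>
  <name>hbase.unsafe.stream.capability.enforce</name>
  <value>false</value>
</property>
```

Removing the stray Hadoop classes from the HBase classpath remains the safer fix.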

I'd recommend you ask on the phoenix list how to properly install
phoenix such that you don't need to copy the jars into the HBase
installation. Hopefully the jar pointed out here is meant to be client
facing only and not installed into the HBase cluster.


On Thu, Jun 7, 2018 at 2:38 PM, Mich Talebzadeh
<mi...@gmail.com> wrote:
> Hi,
>
> Under Hbase Home directory I get
>
> hduser@rhes75: /data6/hduser/hbase-2.0.0> find ./ -name '*.jar' -print
> -exec jar tf {} \; | grep -E "\.jar$|StreamCapabilities" | grep -B 1
> StreamCapabilities
> ./lib/phoenix-5.0.0-alpha-HBase-2.0-client.jar
> org/apache/hadoop/hbase/util/CommonFSUtils$StreamCapabilities.class
> org/apache/hadoop/fs/StreamCapabilities.class
> org/apache/hadoop/fs/StreamCapabilities$StreamCapability.class
> --
> ./lib/hbase-common-2.0.0.jar
> org/apache/hadoop/hbase/util/CommonFSUtils$StreamCapabilities.class
>
> for Hadoop home directory I get nothing
>
> hduser@rhes75: /home/hduser/hadoop-2.7.3> find ./ -name '*.jar' -print
> -exec jar tf {} \; | grep -E "\.jar$|StreamCapabilities" | grep -B 1
> StreamCapabilities
>
>
>
>
>
> On 7 June 2018 at 15:39, Sean Busbey <bu...@apache.org> wrote:
>
>> Somehow, HBase is getting confused by your installation and thinks it
>> can check whether or not the underlying FileSystem implementation
>> (i.e. HDFS) provides hflush/hsync even though that ability is not
>> present in Hadoop 2.7. Usually this means there's a mix of Hadoop
>> versions on the classpath. While you do have both Hadoop 2.7.3 and
>> 2.7.4, that mix shouldn't cause this kind of failure[1].
>>
>> Please run this command and copy/paste the output in your HBase and
>> Hadoop installation directories:
>>
>> find . -name '*.jar' -print -exec jar tf {} \; | grep -E
>> "\.jar$|StreamCapabilities" | grep -B 1 StreamCapabilities
>>
>>
>>
>> [1]: As an aside, you should follow the guidance in our reference
>> guide from the section "Replace the Hadoop Bundled With HBase!" in the
>> Hadoop chapter: http://hbase.apache.org/book.html#hadoop
>>
>> But as I mentioned, I don't think it's the underlying cause in this case.
>>
>> On Thu, Jun 7, 2018 at 8:41 AM, Mich Talebzadeh
>> <mi...@gmail.com> wrote:
>> > Hi,
>> >
>> > Please find below
>> >
>> > *bin/hbase version*
>> > SLF4J: Class path contains multiple SLF4J bindings.
>> > SLF4J: Found binding in
>> > [jar:file:/data6/hduser/hbase-2.0.0/lib/phoenix-5.0.0-alpha-
>> HBase-2.0-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: Found binding in
>> > [jar:file:/data6/hduser/hbase-2.0.0/lib/slf4j-log4j12-1.7.
>> 25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: Found binding in
>> > [jar:file:/home/hduser/hadoop-2.7.3/share/hadoop/common/lib/
>> slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> > explanation.
>> > HBase 2.0.0
>> > Source code repository git://
>> > kalashnikov.att.net/Users/stack/checkouts/hbase.git
>> > revision=7483b111e4da77adbfc8062b3b22cbe7c2cb91c1
>> > Compiled by stack on Sun Apr 22 20:26:55 PDT 2018
>> > From source with checksum a59e806496ef216732e730c746bbe5ac
>> >
>> > ls -lah lib/hadoop*
>> > -rw-r--r-- 1 hduser hadoop  41K Apr 23 04:26
>> > lib/hadoop-annotations-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop  93K Apr 23 04:26 lib/hadoop-auth-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop  26K Apr 23 04:29 lib/hadoop-client-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 1.9M Apr 23 04:28
>> > lib/hadoop-common-2.7.4-tests.jar
>> > -rw-r--r-- 1 hduser hadoop 3.4M Apr 23 04:26 lib/hadoop-common-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 127K Apr 23 04:29 lib/hadoop-distcp-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 3.4M Apr 23 04:29 lib/hadoop-hdfs-2.7.4-tests.
>> jar
>> > -rw-r--r-- 1 hduser hadoop 8.0M Apr 23 04:29 lib/hadoop-hdfs-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 532K Apr 23 04:29
>> > lib/hadoop-mapreduce-client-app-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 759K Apr 23 04:29
>> > lib/hadoop-mapreduce-client-common-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 1.5M Apr 23 04:27
>> > lib/hadoop-mapreduce-client-core-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 188K Apr 23 04:29
>> > lib/hadoop-mapreduce-client-hs-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop  62K Apr 23 04:29
>> > lib/hadoop-mapreduce-client-jobclient-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop  71K Apr 23 04:28
>> > lib/hadoop-mapreduce-client-shuffle-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop  26K Apr 23 04:28
>> > lib/hadoop-minicluster-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 2.0M Apr 23 04:27
>> lib/hadoop-yarn-api-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 163K Apr 23 04:28
>> > lib/hadoop-yarn-client-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 1.7M Apr 23 04:27
>> > lib/hadoop-yarn-common-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 216K Apr 23 04:28
>> > lib/hadoop-yarn-server-applicationhistoryservice-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 380K Apr 23 04:28
>> > lib/hadoop-yarn-server-common-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 703K Apr 23 04:28
>> > lib/hadoop-yarn-server-nodemanager-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 1.3M Apr 23 04:29
>> > lib/hadoop-yarn-server-resourcemanager-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop  75K Apr 23 04:28
>> > lib/hadoop-yarn-server-tests-2.7.4-tests.jar
>> > -rw-r--r-- 1 hduser hadoop  58K Apr 23 04:29
>> > lib/hadoop-yarn-server-web-proxy-2.7.4.jar
>> >
>> > Also I am on Hadoop 2.7.3
>> >
>> > *hadoop version*
>> > Hadoop 2.7.3
>> > Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r
>> > baa91f7c6bc9cb92be5982de4719c1c8af91ccff
>> > Compiled by root on 2016-08-18T01:41Z
>> > Compiled with protoc 2.5.0
>> > From source with checksum 2e4ce5f957ea4db193bce3734ff29ff4
>> > This command was run using
>> > /home/hduser/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar
>> >
>> >
>> >
>> >
>> >
>> > On 7 June 2018 at 14:20, Sean Busbey <se...@gmail.com> wrote:
>> >
>> >> HBase needs HDFS syncs to avoid data loss during component failure.
>> >>
>> >> What's the output of the command "bin/hbase version"?
>> >>
>> >>
>> >> What's the result of doing the following in the hbase install?
>> >>
>> >> ls -lah lib/hadoop*
>> >>
>> >> On Jun 7, 2018 00:58, "Mich Talebzadeh" <mi...@gmail.com>
>> wrote:
>> >>
>> >> Yes, correct: I am using HBase on HDFS with hadoop-2.7.3
>> >>
>> >> The file system is ext4.
>> >>
>> >> I was hoping that I could avoid the sync option.
>> >>
>> >> many thanks
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> On 7 June 2018 at 01:43, Sean Busbey <bu...@apache.org> wrote:
>> >>
>> >> > On Wed, Jun 6, 2018 at 6:11 PM, Mich Talebzadeh
>> >> > <mi...@gmail.com> wrote:
>> >> > >
>> >> > >
>> >> > > so the region server started OK but then I had a problem with
>> master :(
>> >> > >
>> >> > > java.lang.IllegalStateException: The procedure WAL relies on the
>> >> > ability to
>> >> > > hsync for proper operation during component failures, but the
>> >> underlying
>> >> > > filesystem does not support doing so. Please check the config value
>> of
>> >> > > 'hbase.procedure.store.wal.use.hsync' to set the desired level of
>> >> > > robustness and ensure the config value of 'hbase.wal.dir' points to
>> a
>> >> > > FileSystem mount that can provide it.
>> >> > >
>> >> >
>> >> > This error means that you're running on top of a Filesystem that
>> >> > doesn't provide sync.
>> >> >
>> >> > Are you using HDFS? What version?
>> >> >
>> >>
>> >>
>> >>
>>

Re: Problem starting region server with Hbase version hbase-2.0.0

Posted by Sean Busbey <bu...@apache.org>.
Your current problem is caused by this phoenix jar:


> hduser@rhes75: /data6/hduser/hbase-2.0.0> find ./ -name '*.jar' -print
> -exec jar tf {} \; | grep -E "\.jar$|StreamCapabilities" | grep -B 1
> StreamCapabilities
> ./lib/phoenix-5.0.0-alpha-HBase-2.0-client.jar
> org/apache/hadoop/hbase/util/CommonFSUtils$StreamCapabilities.class
> org/apache/hadoop/fs/StreamCapabilities.class
> org/apache/hadoop/fs/StreamCapabilities$StreamCapability.class

I don't know what version of Hadoop it's bundling or why, but it's one
that includes the StreamCapabilities interface, so HBase takes that to
mean it can check on capabilities. Since Hadoop 2.7 doesn't claim to
implement any, HBase throws its hands up.

I'd recommend you ask on the phoenix list how to properly install
phoenix such that you don't need to copy the jars into the HBase
installation. Hopefully the jar pointed out here is meant to be client
facing only and not installed into the HBase cluster.


On Thu, Jun 7, 2018 at 2:38 PM, Mich Talebzadeh
<mi...@gmail.com> wrote:
> Hi,
>
> Under Hbase Home directory I get
>
> hduser@rhes75: /data6/hduser/hbase-2.0.0> find ./ -name '*.jar' -print
> -exec jar tf {} \; | grep -E "\.jar$|StreamCapabilities" | grep -B 1
> StreamCapabilities
> ./lib/phoenix-5.0.0-alpha-HBase-2.0-client.jar
> org/apache/hadoop/hbase/util/CommonFSUtils$StreamCapabilities.class
> org/apache/hadoop/fs/StreamCapabilities.class
> org/apache/hadoop/fs/StreamCapabilities$StreamCapability.class
> --
> ./lib/hbase-common-2.0.0.jar
> org/apache/hadoop/hbase/util/CommonFSUtils$StreamCapabilities.class
>
> for Hadoop home directory I get nothing
>
> hduser@rhes75: /home/hduser/hadoop-2.7.3> find ./ -name '*.jar' -print
> -exec jar tf {} \; | grep -E "\.jar$|StreamCapabilities" | grep -B 1
> StreamCapabilities
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> <https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
> On 7 June 2018 at 15:39, Sean Busbey <bu...@apache.org> wrote:
>
>> Somehow, HBase is getting confused by your installation and thinks it
>> can check for wether or not the underlying FileSystem implementation
>> (i.e. HDFS) provides hflush/hsync even though that ability is not
>> present in Hadoop 2.7. Usually this means there's a mix of Hadoop
>> versions on the classpath. While you do have both Hadoop 2.7.3 and
>> 2.7.4, that mix shouldn't cause this kind of failure[1].
>>
>> Please run this command and copy/paste the output in your HBase and
>> Hadoop installation directories:
>>
>> find . -name '*.jar' -print -exec jar tf {} \; | grep -E
>> "\.jar$|StreamCapabilities" | grep -B 1 StreamCapabilities
>>
>>
>>
>> [1]: As an aside, you should follow the guidance in our reference
>> guide from the section "Replace the Hadoop Bundled With HBase!" in the
>> Hadoop chapter: http://hbase.apache.org/book.html#hadoop
>>
>> But as I mentioned, I don't think it's the underlying cause in this case.
>>
>> On Thu, Jun 7, 2018 at 8:41 AM, Mich Talebzadeh
>> <mi...@gmail.com> wrote:
>> > Hi,
>> >
>> > Please find below
>> >
>> > *bin/hbase version*
>> > SLF4J: Class path contains multiple SLF4J bindings.
>> > SLF4J: Found binding in
>> > [jar:file:/data6/hduser/hbase-2.0.0/lib/phoenix-5.0.0-alpha-
>> HBase-2.0-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: Found binding in
>> > [jar:file:/data6/hduser/hbase-2.0.0/lib/slf4j-log4j12-1.7.
>> 25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: Found binding in
>> > [jar:file:/home/hduser/hadoop-2.7.3/share/hadoop/common/lib/
>> slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>> > SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
>> > explanation.
>> > HBase 2.0.0
>> > Source code repository git://
>> > kalashnikov.att.net/Users/stack/checkouts/hbase.git
>> > revision=7483b111e4da77adbfc8062b3b22cbe7c2cb91c1
>> > Compiled by stack on Sun Apr 22 20:26:55 PDT 2018
>> > From source with checksum a59e806496ef216732e730c746bbe5ac
>> >
>> > *ls -lah lib/hadoop**
>> > -rw-r--r-- 1 hduser hadoop  41K Apr 23 04:26
>> > lib/hadoop-annotations-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop  93K Apr 23 04:26 lib/hadoop-auth-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop  26K Apr 23 04:29 lib/hadoop-client-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 1.9M Apr 23 04:28
>> > lib/hadoop-common-2.7.4-tests.jar
>> > -rw-r--r-- 1 hduser hadoop 3.4M Apr 23 04:26 lib/hadoop-common-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 127K Apr 23 04:29 lib/hadoop-distcp-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 3.4M Apr 23 04:29 lib/hadoop-hdfs-2.7.4-tests.
>> jar
>> > -rw-r--r-- 1 hduser hadoop 8.0M Apr 23 04:29 lib/hadoop-hdfs-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 532K Apr 23 04:29
>> > lib/hadoop-mapreduce-client-app-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 759K Apr 23 04:29
>> > lib/hadoop-mapreduce-client-common-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 1.5M Apr 23 04:27
>> > lib/hadoop-mapreduce-client-core-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 188K Apr 23 04:29
>> > lib/hadoop-mapreduce-client-hs-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop  62K Apr 23 04:29
>> > lib/hadoop-mapreduce-client-jobclient-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop  71K Apr 23 04:28
>> > lib/hadoop-mapreduce-client-shuffle-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop  26K Apr 23 04:28
>> > lib/hadoop-minicluster-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 2.0M Apr 23 04:27
>> lib/hadoop-yarn-api-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 163K Apr 23 04:28
>> > lib/hadoop-yarn-client-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 1.7M Apr 23 04:27
>> > lib/hadoop-yarn-common-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 216K Apr 23 04:28
>> > lib/hadoop-yarn-server-applicationhistoryservice-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 380K Apr 23 04:28
>> > lib/hadoop-yarn-server-common-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 703K Apr 23 04:28
>> > lib/hadoop-yarn-server-nodemanager-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop 1.3M Apr 23 04:29
>> > lib/hadoop-yarn-server-resourcemanager-2.7.4.jar
>> > -rw-r--r-- 1 hduser hadoop  75K Apr 23 04:28
>> > lib/hadoop-yarn-server-tests-2.7.4-tests.jar
>> > -rw-r--r-- 1 hduser hadoop  58K Apr 23 04:29
>> > lib/hadoop-yarn-server-web-proxy-2.7.4.jar
>> >
>> > Also I am on Hadoop 2.7.3
>> >
>> > *hadoop version*
>> > Hadoop 2.7.3
>> > Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r
>> > baa91f7c6bc9cb92be5982de4719c1c8af91ccff
>> > Compiled by root on 2016-08-18T01:41Z
>> > Compiled with protoc 2.5.0
>> > From source with checksum 2e4ce5f957ea4db193bce3734ff29ff4
>> > This command was run using
>> > /home/hduser/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar
>> >
>> >
>> >
>> >
>> >
>> > On 7 June 2018 at 14:20, Sean Busbey <se...@gmail.com> wrote:
>> >
>> >> HBase needs HDFS syncs to avoid data loss during component failure.
>> >>
>> >> What's the output of the command "bin/hbase version"?
>> >>
>> >>
>> >> What's the result of doing the following in the hbase install?
>> >>
>> >> ls -lah lib/hadoop*
>> >>
>> >> On Jun 7, 2018 00:58, "Mich Talebzadeh" <mi...@gmail.com>
>> wrote:
>> >>
>> >> yes correct I am using Hbase on hdfs  with hadoop-2.7.3
>> >>
>> >> The file system is ext4.
>> >>
>> >> I was hoping that I can avoid the sync option,
>> >>
>> >> many thanks
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> On 7 June 2018 at 01:43, Sean Busbey <bu...@apache.org> wrote:
>> >>
>> >> > On Wed, Jun 6, 2018 at 6:11 PM, Mich Talebzadeh
>> >> > <mi...@gmail.com> wrote:
>> >> > >
>> >> > >
>> >> > > so the region server started OK but then I had a problem with
>> master :(
>> >> > >
>> >> > > java.lang.IllegalStateException: The procedure WAL relies on the
>> >> > ability to
>> >> > > hsync for proper operation during component failures, but the
>> >> underlying
>> >> > > filesystem does not support doing so. Please check the config value
>> of
>> >> > > 'hbase.procedure.store.wal.use.hsync' to set the desired level of
>> >> > > robustness and ensure the config value of 'hbase.wal.dir' points to
>> a
>> >> > > FileSystem mount that can provide it.
>> >> > >
>> >> >
>> >> > This error means that you're running on top of a Filesystem that
>> >> > doesn't provide sync.
>> >> >
>> >> > Are you using HDFS? What version?
>> >> >
>> >>
>> >>
>> >>
>>
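As an aside for anyone hitting the IllegalStateException quoted above on a test cluster: the property named in the message can be set to false in hbase-site.xml to skip the hsync requirement. This trades durability of the procedure WAL for the ability to start, so treat it as a stopgap for non-production setups only, not a fix for the classpath problem:

```xml
<!-- hbase-site.xml: relax the procedure-store hsync requirement.
     WARNING: only for test clusters; a crash can lose procedure
     WAL data when hsync is not enforced. -->
<property>
  <name>hbase.procedure.store.wal.use.hsync</name>
  <value>false</value>
</property>
```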




Re: Problem starting region server with Hbase version hbase-2.0.0

Posted by Sean Busbey <bu...@apache.org>.
Somehow, HBase is getting confused by your installation and thinks it
can check whether or not the underlying FileSystem implementation
(i.e. HDFS) provides hflush/hsync, even though that ability is not
present in Hadoop 2.7. Usually this means there's a mix of Hadoop
versions on the classpath. While you do have both Hadoop 2.7.3 and
2.7.4, that mix shouldn't cause this kind of failure[1].

Please run this command in your HBase and Hadoop installation
directories and copy/paste the output:

find . -name '*.jar' -print -exec jar tf {} \; | grep -E "\.jar$|StreamCapabilities" | grep -B 1 StreamCapabilities



[1]: As an aside, you should follow the guidance in our reference
guide from the section "Replace the Hadoop Bundled With HBase!" in the
Hadoop chapter: http://hbase.apache.org/book.html#hadoop

But as I mentioned, I don't think it's the underlying cause in this case.
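As an illustration of the kind of mixed-version check described here, a rough sketch that uses made-up jar names rather than any real install:

```shell
# Sketch: detect whether more than one Hadoop version is bundled.
# The jar names below are fabricated samples; in a real HBase install
# you would point this at the lib/ directory instead.
tmp=$(mktemp -d)
touch "$tmp/hadoop-common-2.7.4.jar" \
      "$tmp/hadoop-hdfs-2.7.4.jar" \
      "$tmp/hadoop-annotations-2.7.3.jar"   # a stray jar from another release

# Pull the version suffix out of each hadoop-*.jar name and de-duplicate.
versions=$(ls "$tmp"/hadoop-*.jar \
  | sed -E 's/.*-([0-9]+\.[0-9]+\.[0-9]+)(-tests)?\.jar/\1/' \
  | sort -u)
count=$(echo "$versions" | wc -l | tr -d ' ')

echo "$versions"
if [ "$count" -gt 1 ]; then
  echo "WARNING: $count distinct Hadoop versions on the classpath"
fi
rm -rf "$tmp"
```

Run from the HBase install root (and again from the Hadoop install) to see at a glance whether the bundled jar versions line up.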

On Thu, Jun 7, 2018 at 8:41 AM, Mich Talebzadeh
<mi...@gmail.com> wrote:
> Hi,
>
> Please find below
>
> *bin/hbase version*
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/data6/hduser/hbase-2.0.0/lib/phoenix-5.0.0-alpha-HBase-2.0-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/data6/hduser/hbase-2.0.0/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/home/hduser/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> HBase 2.0.0
> Source code repository git://
> kalashnikov.att.net/Users/stack/checkouts/hbase.git
> revision=7483b111e4da77adbfc8062b3b22cbe7c2cb91c1
> Compiled by stack on Sun Apr 22 20:26:55 PDT 2018
> From source with checksum a59e806496ef216732e730c746bbe5ac
>
> *l**s -lah lib/hadoop**
> -rw-r--r-- 1 hduser hadoop  41K Apr 23 04:26
> lib/hadoop-annotations-2.7.4.jar
> -rw-r--r-- 1 hduser hadoop  93K Apr 23 04:26 lib/hadoop-auth-2.7.4.jar
> -rw-r--r-- 1 hduser hadoop  26K Apr 23 04:29 lib/hadoop-client-2.7.4.jar
> -rw-r--r-- 1 hduser hadoop 1.9M Apr 23 04:28
> lib/hadoop-common-2.7.4-tests.jar
> -rw-r--r-- 1 hduser hadoop 3.4M Apr 23 04:26 lib/hadoop-common-2.7.4.jar
> -rw-r--r-- 1 hduser hadoop 127K Apr 23 04:29 lib/hadoop-distcp-2.7.4.jar
> -rw-r--r-- 1 hduser hadoop 3.4M Apr 23 04:29 lib/hadoop-hdfs-2.7.4-tests.jar
> -rw-r--r-- 1 hduser hadoop 8.0M Apr 23 04:29 lib/hadoop-hdfs-2.7.4.jar
> -rw-r--r-- 1 hduser hadoop 532K Apr 23 04:29
> lib/hadoop-mapreduce-client-app-2.7.4.jar
> -rw-r--r-- 1 hduser hadoop 759K Apr 23 04:29
> lib/hadoop-mapreduce-client-common-2.7.4.jar
> -rw-r--r-- 1 hduser hadoop 1.5M Apr 23 04:27
> lib/hadoop-mapreduce-client-core-2.7.4.jar
> -rw-r--r-- 1 hduser hadoop 188K Apr 23 04:29
> lib/hadoop-mapreduce-client-hs-2.7.4.jar
> -rw-r--r-- 1 hduser hadoop  62K Apr 23 04:29
> lib/hadoop-mapreduce-client-jobclient-2.7.4.jar
> -rw-r--r-- 1 hduser hadoop  71K Apr 23 04:28
> lib/hadoop-mapreduce-client-shuffle-2.7.4.jar
> -rw-r--r-- 1 hduser hadoop  26K Apr 23 04:28
> lib/hadoop-minicluster-2.7.4.jar
> -rw-r--r-- 1 hduser hadoop 2.0M Apr 23 04:27 lib/hadoop-yarn-api-2.7.4.jar
> -rw-r--r-- 1 hduser hadoop 163K Apr 23 04:28
> lib/hadoop-yarn-client-2.7.4.jar
> -rw-r--r-- 1 hduser hadoop 1.7M Apr 23 04:27
> lib/hadoop-yarn-common-2.7.4.jar
> -rw-r--r-- 1 hduser hadoop 216K Apr 23 04:28
> lib/hadoop-yarn-server-applicationhistoryservice-2.7.4.jar
> -rw-r--r-- 1 hduser hadoop 380K Apr 23 04:28
> lib/hadoop-yarn-server-common-2.7.4.jar
> -rw-r--r-- 1 hduser hadoop 703K Apr 23 04:28
> lib/hadoop-yarn-server-nodemanager-2.7.4.jar
> -rw-r--r-- 1 hduser hadoop 1.3M Apr 23 04:29
> lib/hadoop-yarn-server-resourcemanager-2.7.4.jar
> -rw-r--r-- 1 hduser hadoop  75K Apr 23 04:28
> lib/hadoop-yarn-server-tests-2.7.4-tests.jar
> -rw-r--r-- 1 hduser hadoop  58K Apr 23 04:29
> lib/hadoop-yarn-server-web-proxy-2.7.4.jar
>
> Also I am on Hadoop 2.7.3
>
> *hadoop version*
> Hadoop 2.7.3
> Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r
> baa91f7c6bc9cb92be5982de4719c1c8af91ccff
> Compiled by root on 2016-08-18T01:41Z
> Compiled with protoc 2.5.0
> From source with checksum 2e4ce5f957ea4db193bce3734ff29ff4
> This command was run using
> /home/hduser/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> <https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
> On 7 June 2018 at 14:20, Sean Busbey <se...@gmail.com> wrote:
>
>> HBase needs HDFS syncs to avoid dataloss during component failure.
>>
>> What's the output of the command "bin/hbase version"?
>>
>>
>> What's the result of doing the following in the hbase install?
>>
>> ls -lah lib/hadoop*
>>
>> On Jun 7, 2018 00:58, "Mich Talebzadeh" <mi...@gmail.com> wrote:
>>
>> yes correct I am using Hbase on hdfs  with hadoop-2.7.3
>>
>> The file system is ext4.
>>
>> I was hoping that I can avoid the sync option,
>>
>> many thanks
>>
>>
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>> LinkedIn * https://www.linkedin.com/profile/view?id=
>> AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>> <https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCd
>> OABUrV8Pw>*
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
>> loss, damage or destruction of data or any other property which may arise
>> from relying on this email's technical content is explicitly disclaimed.
>> The author will in no case be liable for any monetary damages arising from
>> such loss, damage or destruction.
>>
>>
>>
>> On 7 June 2018 at 01:43, Sean Busbey <bu...@apache.org> wrote:
>>
>> > On Wed, Jun 6, 2018 at 6:11 PM, Mich Talebzadeh
>> > <mi...@gmail.com> wrote:
>> > >
>> > >
>> > > so the region server started OK but then I had a problem with master :(
>> > >
>> > > java.lang.IllegalStateException: The procedure WAL relies on the
>> > ability to
>> > > hsync for proper operation during component failures, but the
>> underlying
>> > > filesystem does not support doing so. Please check the config value of
>> > > 'hbase.procedure.store.wal.use.hsync' to set the desired level of
>> > > robustness and ensure the config value of 'hbase.wal.dir' points to a
>> > > FileSystem mount that can provide it.
>> > >
>> >
>> > This error means that you're running on top of a Filesystem that
>> > doesn't provide sync.
>> >
>> > Are you using HDFS? What version?
>> >
>>
>>
>>

Re: Problem starting region server with Hbase version hbase-2.0.0

Posted by Mich Talebzadeh <mi...@gmail.com>.
Hi,

Please find below

*bin/hbase version*
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/data6/hduser/hbase-2.0.0/lib/phoenix-5.0.0-alpha-HBase-2.0-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/data6/hduser/hbase-2.0.0/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/home/hduser/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
HBase 2.0.0
Source code repository git://
kalashnikov.att.net/Users/stack/checkouts/hbase.git
revision=7483b111e4da77adbfc8062b3b22cbe7c2cb91c1
Compiled by stack on Sun Apr 22 20:26:55 PDT 2018
From source with checksum a59e806496ef216732e730c746bbe5ac

*ls -lah lib/hadoop**
-rw-r--r-- 1 hduser hadoop  41K Apr 23 04:26
lib/hadoop-annotations-2.7.4.jar
-rw-r--r-- 1 hduser hadoop  93K Apr 23 04:26 lib/hadoop-auth-2.7.4.jar
-rw-r--r-- 1 hduser hadoop  26K Apr 23 04:29 lib/hadoop-client-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 1.9M Apr 23 04:28
lib/hadoop-common-2.7.4-tests.jar
-rw-r--r-- 1 hduser hadoop 3.4M Apr 23 04:26 lib/hadoop-common-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 127K Apr 23 04:29 lib/hadoop-distcp-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 3.4M Apr 23 04:29 lib/hadoop-hdfs-2.7.4-tests.jar
-rw-r--r-- 1 hduser hadoop 8.0M Apr 23 04:29 lib/hadoop-hdfs-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 532K Apr 23 04:29
lib/hadoop-mapreduce-client-app-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 759K Apr 23 04:29
lib/hadoop-mapreduce-client-common-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 1.5M Apr 23 04:27
lib/hadoop-mapreduce-client-core-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 188K Apr 23 04:29
lib/hadoop-mapreduce-client-hs-2.7.4.jar
-rw-r--r-- 1 hduser hadoop  62K Apr 23 04:29
lib/hadoop-mapreduce-client-jobclient-2.7.4.jar
-rw-r--r-- 1 hduser hadoop  71K Apr 23 04:28
lib/hadoop-mapreduce-client-shuffle-2.7.4.jar
-rw-r--r-- 1 hduser hadoop  26K Apr 23 04:28
lib/hadoop-minicluster-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 2.0M Apr 23 04:27 lib/hadoop-yarn-api-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 163K Apr 23 04:28
lib/hadoop-yarn-client-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 1.7M Apr 23 04:27
lib/hadoop-yarn-common-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 216K Apr 23 04:28
lib/hadoop-yarn-server-applicationhistoryservice-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 380K Apr 23 04:28
lib/hadoop-yarn-server-common-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 703K Apr 23 04:28
lib/hadoop-yarn-server-nodemanager-2.7.4.jar
-rw-r--r-- 1 hduser hadoop 1.3M Apr 23 04:29
lib/hadoop-yarn-server-resourcemanager-2.7.4.jar
-rw-r--r-- 1 hduser hadoop  75K Apr 23 04:28
lib/hadoop-yarn-server-tests-2.7.4-tests.jar
-rw-r--r-- 1 hduser hadoop  58K Apr 23 04:29
lib/hadoop-yarn-server-web-proxy-2.7.4.jar

Also I am on Hadoop 2.7.3

*hadoop version*
Hadoop 2.7.3
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r
baa91f7c6bc9cb92be5982de4719c1c8af91ccff
Compiled by root on 2016-08-18T01:41Z
Compiled with protoc 2.5.0
From source with checksum 2e4ce5f957ea4db193bce3734ff29ff4
This command was run using
/home/hduser/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar


Dr Mich Talebzadeh



LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
<https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 7 June 2018 at 14:20, Sean Busbey <se...@gmail.com> wrote:

> HBase needs HDFS syncs to avoid dataloss during component failure.
>
> What's the output of the command "bin/hbase version"?
>
>
> What's the result of doing the following in the hbase install?
>
> ls -lah lib/hadoop*
>
> On Jun 7, 2018 00:58, "Mich Talebzadeh" <mi...@gmail.com> wrote:
>
> yes correct I am using Hbase on hdfs  with hadoop-2.7.3
>
> The file system is ext4.
>
> I was hoping that I can avoid the sync option,
>
> many thanks
>
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn * https://www.linkedin.com/profile/view?id=
> AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> <https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCd
> OABUrV8Pw>*
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
> On 7 June 2018 at 01:43, Sean Busbey <bu...@apache.org> wrote:
>
> > On Wed, Jun 6, 2018 at 6:11 PM, Mich Talebzadeh
> > <mi...@gmail.com> wrote:
> > >
> > >
> > > so the region server started OK but then I had a problem with master :(
> > >
> > > java.lang.IllegalStateException: The procedure WAL relies on the
> > ability to
> > > hsync for proper operation during component failures, but the
> underlying
> > > filesystem does not support doing so. Please check the config value of
> > > 'hbase.procedure.store.wal.use.hsync' to set the desired level of
> > > robustness and ensure the config value of 'hbase.wal.dir' points to a
> > > FileSystem mount that can provide it.
> > >
> >
> > This error means that you're running on top of a Filesystem that
> > doesn't provide sync.
> >
> > Are you using HDFS? What version?
> >
>
>
>

Re: Problem starting region server with Hbase version hbase-2.0.0

Posted by Sean Busbey <se...@gmail.com>.
HBase needs HDFS syncs to avoid data loss during component failure.

What's the output of the command "bin/hbase version"?


What's the result of doing the following in the hbase install?

ls -lah lib/hadoop*

On Jun 7, 2018 00:58, "Mich Talebzadeh" <mi...@gmail.com> wrote:

yes correct I am using Hbase on hdfs  with hadoop-2.7.3

The file system is ext4.

I was hoping that I can avoid the sync option,

many thanks



Dr Mich Talebzadeh



LinkedIn *
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
<
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>*



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 7 June 2018 at 01:43, Sean Busbey <bu...@apache.org> wrote:

> On Wed, Jun 6, 2018 at 6:11 PM, Mich Talebzadeh
> <mi...@gmail.com> wrote:
> >
> >
> > so the region server started OK but then I had a problem with master :(
> >
> > java.lang.IllegalStateException: The procedure WAL relies on the
> ability to
> > hsync for proper operation during component failures, but the underlying
> > filesystem does not support doing so. Please check the config value of
> > 'hbase.procedure.store.wal.use.hsync' to set the desired level of
> > robustness and ensure the config value of 'hbase.wal.dir' points to a
> > FileSystem mount that can provide it.
> >
>
> This error means that you're running on top of a Filesystem that
> doesn't provide sync.
>
> Are you using HDFS? What version?
>

Re: Problem starting region server with Hbase version hbase-2.0.0

Posted by Mich Talebzadeh <mi...@gmail.com>.
Yes, correct. I am using HBase on HDFS with hadoop-2.7.3.

The file system is ext4.

I was hoping that I could avoid the sync option.

Many thanks


Dr Mich Talebzadeh



LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
<https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 7 June 2018 at 01:43, Sean Busbey <bu...@apache.org> wrote:

> On Wed, Jun 6, 2018 at 6:11 PM, Mich Talebzadeh
> <mi...@gmail.com> wrote:
> >
> >
> > so the region server started OK but then I had a problem with master :(
> >
> > java.lang.IllegalStateException: The procedure WAL relies on the
> ability to
> > hsync for proper operation during component failures, but the underlying
> > filesystem does not support doing so. Please check the config value of
> > 'hbase.procedure.store.wal.use.hsync' to set the desired level of
> > robustness and ensure the config value of 'hbase.wal.dir' points to a
> > FileSystem mount that can provide it.
> >
>
> This error means that you're running on top of a Filesystem that
> doesn't provide sync.
>
> Are you using HDFS? What version?
>

Re: Problem starting region server with Hbase version hbase-2.0.0

Posted by Sean Busbey <bu...@apache.org>.
On Wed, Jun 6, 2018 at 6:11 PM, Mich Talebzadeh
<mi...@gmail.com> wrote:
>
>
> so the region server started OK but then I had a problem with master :(
>
> java.lang.IllegalStateException: The procedure WAL relies on the ability to
> hsync for proper operation during component failures, but the underlying
> filesystem does not support doing so. Please check the config value of
> 'hbase.procedure.store.wal.use.hsync' to set the desired level of
> robustness and ensure the config value of 'hbase.wal.dir' points to a
> FileSystem mount that can provide it.
>

This error means that you're running on top of a Filesystem that
doesn't provide sync.

Are you using HDFS? What version?

Re: Problem starting region server with Hbase version hbase-2.0.0

Posted by Mich Talebzadeh <mi...@gmail.com>.
Thanks all.

In my older version of HBase 1.2.3 I had added the correct Phoenix jar file
(phoenix-4.8.1-HBase-1.2-client.jar) to the /lib directory of HBase.

I found the correct jar file for HBase 2.0.0
in phoenix-5.0.0-alpha-HBase-2.0-client.jar

jar tvf phoenix-5.0.0-alpha-HBase-2.0-client.jar | grep IndexedWALEditCodec
  1881 Thu Feb 08 17:36:50 GMT 2018
org/apache/hadoop/hbase/regionserver/wal/IndexedWALEditCodec$BinaryCompatibleCompressedIndexKeyValueDecoder.class
  1223 Thu Feb 08 17:36:50 GMT 2018
org/apache/hadoop/hbase/regionserver/wal/IndexedWALEditCodec$BinaryCompatibleIndexKeyValueDecoder.class
   830 Thu Feb 08 17:36:50 GMT 2018
org/apache/hadoop/hbase/regionserver/wal/IndexedWALEditCodec$BinaryCompatiblePhoenixBaseDecoder.class
  1801 Thu Feb 08 17:36:50 GMT 2018
org/apache/hadoop/hbase/regionserver/wal/IndexedWALEditCodec$CompressedIndexKeyValueDecoder.class
  1919 Thu Feb 08 17:36:50 GMT 2018
org/apache/hadoop/hbase/regionserver/wal/IndexedWALEditCodec$CompressedIndexKeyValueEncoder.class
  1143 Thu Feb 08 17:36:50 GMT 2018
org/apache/hadoop/hbase/regionserver/wal/IndexedWALEditCodec$IndexKeyValueDecoder.class
  1345 Thu Feb 08 17:36:50 GMT 2018
org/apache/hadoop/hbase/regionserver/wal/IndexedWALEditCodec$IndexKeyValueEncoder.class
   755 Thu Feb 08 17:36:50 GMT 2018
org/apache/hadoop/hbase/regionserver/wal/IndexedWALEditCodec$PhoenixBaseDecoder.class
   762 Thu Feb 08 17:36:50 GMT 2018
org/apache/hadoop/hbase/regionserver/wal/IndexedWALEditCodec$PhoenixBaseEncoder.class
  4436 Thu Feb 08 17:36:50 GMT 2018
org/apache/hadoop/hbase/regionserver/wal/IndexedWALEditCodec.class

So the region server started OK, but then I had a problem with the master :(

java.lang.IllegalStateException: The procedure WAL relies on the ability to
hsync for proper operation during component failures, but the underlying
filesystem does not support doing so. Please check the config value of
'hbase.procedure.store.wal.use.hsync' to set the desired level of
robustness and ensure the config value of 'hbase.wal.dir' points to a
FileSystem mount that can provide it.
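
The property named in that error lives in hbase-site.xml; a minimal sketch of the entry (setting it to false relaxes the hsync requirement, which trades away durability guarantees and is only sensible for test setups):

```xml
<!-- hbase-site.xml: entry for the property named in the error above.
     false relaxes the procedure WAL's hsync requirement; this is a
     durability trade-off, not a recommended production setting. -->
<property>
  <name>hbase.procedure.store.wal.use.hsync</name>
  <value>false</value>
</property>
```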


I tried that mentioned property in hbase-site.xml but no luck. However, I
saw this recent note
<https://stackoverflow.com/questions/50229580/hbase-shell-cannot-use-error-keepererrorcode-nonode-for-hbase-master>


   - I had similar issues with the recent HBase 2.x beta releases, whereas
   everything was OK with stable 1.x releases. Are you using 2.x beta? –
   VS_FF <https://stackoverflow.com/users/7241513/vs-ff> May 8 at 13:16
   <https://stackoverflow.com/questions/50229580/hbase-shell-cannot-use-error-keepererrorcode-nonode-for-hbase-master#comment87486468_50229580>
   -
   yes, i guess that it is caused by releases problem – Solodye
   <https://stackoverflow.com/users/8351601/solodye> May 10 at 12:58
   <https://stackoverflow.com/questions/50229580/hbase-shell-cannot-use-error-keepererrorcode-nonode-for-hbase-master#comment87563798_50229580>


So I reverted back to the stable release HBase 1.2.6, unless someone has
resolved this issue.

Thanks

Dr Mich Talebzadeh



LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
<https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 6 June 2018 at 23:24, Juan Jose Escobar <ju...@gmail.com>
wrote:

> Hello Mich,
>
> Verify you have the right jars (from your comments I guess should be
> phoenix-5.0.0-alpha-HBase-2.0-server.jar), that it shows in HBase
> classpath
> and that it contains the missing class e.g. with jar -vtf.
>
> Also, check if there are any pending WALs that are making the startup fail,
> I had similar problem and Phoenix seemed to cause problems at startup until
> I removed the WALs.
>
> On Wed, Jun 6, 2018 at 10:55 PM, Mich Talebzadeh <
> mich.talebzadeh@gmail.com>
> wrote:
>
> > Thanks Sean. I downloaded Phoenix for Hbase version 2
> > (apache-phoenix-5.0.0-alpha-HBase-2.0-bin) but still the same error
> >
> > 2018-06-06 21:45:15,297 INFO  [regionserver/rhes75:16020]
> > wal.AbstractFSWAL: WAL configuration: blocksize=256 MB, rollsize=128 MB,
> > prefix=rhes75%2C16020%2C1528317910703, suffix=,
> > logDir=hdfs://rhes75:9000/hbase/WALs/rhes75,16020,152
> > 8317910703, archiveDir=hdfs://rhes75:9000/hbase/oldWALs
> > 2018-06-06 21:45:15,414 ERROR [regionserver/rhes75:16020]
> > regionserver.HRegionServer: ***** ABORTING region server
> > rhes75,16020,1528317910703: Unhandled: Unable to find
> > org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec **
> > ***
> >
> > *java.lang.UnsupportedOperationException: Unable to find
> > org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec*        at
> > org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(
> > ReflectionUtils.java:47)
> >         at
> > org.apache.hadoop.hbase.regionserver.wal.WALCellCodec.
> > create(WALCellCodec.java:112)
> >         at
> > org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.
> > getCodec(AbstractProtobufLogWriter.java:75)
> >         at
> > org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.
> > initAfterHeader0(AbstractProtobufLogWriter.java:184)
> >         at
> > org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.
> > initAfterHeader(AbstractProtobufLogWriter.java:192)
> >         at
> > org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(
> > AbstractProtobufLogWriter.java:174)
> >         at
> > org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(
> > AsyncFSWALProvider.java:99)
> >         at
> > org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.
> createWriterInstance(
> > AsyncFSWAL.java:612)
> >         at
> > org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.
> createWriterInstance(
> > AsyncFSWAL.java:124)
> >         at
> > org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(
> > AbstractFSWAL.java:759)
> >         at
> > org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(
> > AbstractFSWAL.java:489)
> >         at
> > org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.<
> > init>(AsyncFSWAL.java:251)
> >         at
> > org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createWAL(
> > AsyncFSWALProvider.java:69)
> >         at
> > org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createWAL(
> > AsyncFSWALProvider.java:44)
> >         at
> > org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(
> > AbstractFSWALProvider.java:138)
> >         at
> > org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(
> > AbstractFSWALProvider.java:57)
> >         at
> > org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:252)
> >         at
> > org.apache.hadoop.hbase.regionserver.HRegionServer.
> > getWAL(HRegionServer.java:2065)
> >         at
> > org.apache.hadoop.hbase.regionserver.HRegionServer.
> > buildServerLoad(HRegionServer.java:1291)
> >         at
> > org.apache.hadoop.hbase.regionserver.HRegionServer.
> tryRegionServerReport(
> > HRegionServer.java:1172)
> >         at
> > org.apache.hadoop.hbase.regionserver.HRegionServer.
> > run(HRegionServer.java:989)
> >         at java.lang.Thread.run(Thread.java:745)
> > Caused by: java.lang.ClassNotFoundException:
> > org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec
> >         at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> >         at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> >         at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
> >         at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> >         at java.lang.Class.forName0(Native Method)
> >         at java.lang.Class.forName(Class.java:264)
> >         at
> > org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(
> > ReflectionUtils.java:43)
> >         ... 21 more
> > 2018-06-06 21:45:15,415 ERROR [regionserver/rhes75:16020]
> > regionserver.HRegionServer: RegionServer abort: loaded coprocessors are:
> []
> > 2018-06-06 21:45:15,420 INFO  [regionserver/rhes75:16020]
> > regionserver.HRegionServer:    "Verbose" : false,
> >     "ObjectPendingFinalizationCount" : 0,
> >     "NonHeapMemoryUsage" : {
> >       "committed" : 59793408,
> >       "init" : 2555904,
> >       "max" : -1,
> >       "used" : 58519176
> >     },
> >     "HeapMemoryUsage" : {
> >       "committed" : 1017708544,
> >       "init" : 1052770304,
> >       "max" : 16777347072,
> >       "used" : 255809352
> >     },
> >     "ObjectName" : "java.lang:type=Memory"
> >   } ],
> >   "beans" : [ {
> >     "name" : "Hadoop:service=HBase,name=RegionServer,sub=IPC",
> >     "modelerType" : "RegionServer,sub=IPC",
> >     "tag.Context" : "regionserver",
> >     "tag.Hostname" : "rhes75"
> >   } ],
> >   "beans" : [ {
> >     "name" : "Hadoop:service=HBase,name=RegionServer,sub=Replication",
> >     "modelerType" : "RegionServer,sub=Replication",
> >     "tag.Context" : "regionserver",
> >     "tag.Hostname" : "rhes75"
> >   } ],
> >   "beans" : [ {
> >     "name" : "Hadoop:service=HBase,name=RegionServer,sub=Server",
> >     "modelerType" : "RegionServer,sub=Server",
> >     "tag.Context" : "regionserver",
> >     "tag.Hostname" : "rhes75"
> >   } ]
> > }
> > 2018-06-06 21:45:15,430 INFO  [regionserver/rhes75:16020]
> > regionserver.HRegionServer: ***** STOPPING region server
> > 'rhes75,16020,1528317910703' *****
> > 2018-06-06 21:45:15,430 INFO  [regionserver/rhes75:16020]
> > regionserver.HRegionServer: STOPPED: Unhandled: Unable to find
> > org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec
> > 2018-06-06 21:45:15,430 INFO  [regionserver/rhes75:16020]
> > regionserver.SplitLogWorker: Sending interrupt to stop the worker thread
> > 2018-06-06 21:45:15,430 INFO  [regionserver/rhes75:16020]
> > regionserver.HRegionServer: Stopping infoServer
> > 2018-06-06 21:45:15,430 INFO  [SplitLogWorker-rhes75:16020]
> > regionserver.SplitLogWorker: SplitLogWorker interrupted. Exiting.
> > 2018-06-06 21:45:15,430 INFO  [SplitLogWorker-rhes75:16020]
> > regionserver.SplitLogWorker: SplitLogWorker rhes75,16020,1528317910703
> > exiting
> > 2018-06-06 21:45:15,434 INFO  [regionserver/rhes75:16020]
> > handler.ContextHandler: Stopped o.e.j.w.WebAppContext@1e530163
> > {/,null,UNAVAILABLE}{file:/data6/hduser/hbase-2.0.0/
> > hbase-webapps/regionserver}
> > 2018-06-06 21:45:15,436 INFO  [regionserver/rhes75:16020]
> > server.AbstractConnector: Stopped ServerConnector@5c60b0a0
> > {HTTP/1.1,[http/1.1]}{0.0.0.0:16030}
> > 2018-06-06 21:45:15,436 INFO  [regionserver/rhes75:16020]
> > handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler@39651a82
> > {/static,file:///data6/hduser/hbase-2.0.0/hbase-webapps/
> > static/,UNAVAILABLE}
> > 2018-06-06 21:45:15,436 INFO  [regionserver/rhes75:16020]
> > handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler@70211e49
> > {/logs,file:///data6/hduser/hbase-2.0.0/logs/,UNAVAILABLE}
> > 2018-06-06 21:45:15,437 INFO  [regionserver/rhes75:16020]
> > regionserver.HeapMemoryManager: Stopping
> > 2018-06-06 21:45:15,437 INFO  [regionserver/rhes75:16020]
> > flush.RegionServerFlushTableProcedureManager: Stopping region server
> flush
> > procedure manager abruptly.
> > 2018-06-06 21:45:15,437 INFO  [MemStoreFlusher.1]
> > regionserver.MemStoreFlusher: MemStoreFlusher.1 exiting
> > 2018-06-06 21:45:15,437 INFO  [regionserver/rhes75:16020]
> > snapshot.RegionServerSnapshotManager: Stopping
> RegionServerSnapshotManager
> > abruptly.
> > 2018-06-06 21:45:15,437 INFO  [MemStoreFlusher.0]
> > regionserver.MemStoreFlusher: MemStoreFlusher.0 exiting
> > 2018-06-06 21:45:15,437 INFO  [regionserver/rhes75:16020]
> > regionserver.HRegionServer: aborting server rhes75,16020,1528317910703
> > 2018-06-06 21:45:15,437 INFO  [regionserver/rhes75:16020]
> > zookeeper.ReadOnlyZKClient: Close zookeeper connection 0x2a9ccc02 to
> > localhost:2181
> > 2018-06-06 21:45:15,439 INFO  [regionserver/rhes75:16020]
> > regionserver.HRegionServer: stopping server rhes75,16020,1528317910703;
> all
> > regions closed.
> > 2018-06-06 21:45:15,440 INFO  [regionserver/rhes75:16020]
> > regionserver.Leases: Closed leases
> > 2018-06-06 21:45:15,440 INFO  [regionserver/rhes75:16020]
> > hbase.ChoreService: Chore service for: regionserver/rhes75:16020 had
> > [[ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit:
> > MILLISECONDS], [ScheduledChore: Nam
> > e: CompactedHFilesCleaner Period: 120000 Unit: MILLISECONDS],
> > [ScheduledChore: Name: MovedRegionsCleaner for region
> > rhes75,16020,1528317910703 Period: 120000 Unit: MILLISECONDS],
> > [ScheduledChore: Name: MemstoreFlusherChore Period: 1
> > 0000 Unit: MILLISECONDS]] on shutdown
> > 2018-06-06 21:45:15,440 INFO  [regionserver/rhes75:16020.logRoller]
> > regionserver.LogRoller: LogRoller exiting.
> >
> >
> > Dr Mich Talebzadeh
> >
> >
> >
> > LinkedIn * https://www.linkedin.com/profile/view?id=
> > AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> > <https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCd
> > OABUrV8Pw>*
> >
> >
> >
> > http://talebzadehmich.wordpress.com
> >
> >
> > *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> > loss, damage or destruction of data or any other property which may arise
> > from relying on this email's technical content is explicitly disclaimed.
> > The author will in no case be liable for any monetary damages arising
> from
> > such loss, damage or destruction.
> >
> >
> >
> > On 6 June 2018 at 20:49, Sean Busbey <bu...@apache.org> wrote:
> >
> > > IndexedWALEditCodec is a class from the Apache Phoenix project. Your
> > > cluster must be configured to have Phoenix run, but it can't find the
> > > jars for Phoenix.
> > >
> > > user@phoenix.apache.org is probably your best bet for getting things
> > > going.
> > >
> > > On Wed, Jun 6, 2018 at 1:52 PM, Mich Talebzadeh
> > > <mi...@gmail.com> wrote:
> > > > Hi,
> > > >
> > > > I have an old Hbase hbase-1.2.3 that runs fine on both RHES 5.6 and
> > RHES
> > > 7.5
> > > >
> > > > I created a new Hbase hbase-2.0.0 instance on RHES 7.5.
> > > >
> > > > I seem to have a problem with my region server as it fails to start
> > > > throwing error
> > > >
> > > > 2018-06-06 19:28:37,033 INFO  [regionserver/rhes75:16020]
> > > > regionserver.HRegionServer: CompactionChecker runs every PT10S
> > > > 2018-06-06 19:28:37,071 INFO  [SplitLogWorker-rhes75:16020]
> > > > regionserver.SplitLogWorker: SplitLogWorker
> rhes75,16020,1528309715572
> > > > starting
> > > > 2018-06-06 19:28:37,073 INFO  [regionserver/rhes75:16020]
> > > > regionserver.HeapMemoryManager: Starting, tuneOn=false
> > > > 2018-06-06 19:28:37,076 INFO  [regionserver/rhes75:16020]
> > > > regionserver.ChunkCreator: Allocating data MemStoreChunkPool with
> chunk
> > > > size 2 MB, max count 2880, initial count 0
> > > > 2018-06-06 19:28:37,077 INFO  [regionserver/rhes75:16020]
> > > > regionserver.ChunkCreator: Allocating index MemStoreChunkPool with
> > chunk
> > > > size 204.80 KB, max count 3200, initial count 0
> > > > 2018-06-06 19:28:37,078 INFO  [ReplicationExecutor-0]
> > > > regionserver.ReplicationSourceManager: Current list of replicators:
> > > > [rhes75,16020,1528309715572] other RSs: [rhes75,16020,1528309715572]
> > > > 2018-06-06 19:28:37,099 INFO  [regionserver/rhes75:16020]
> > > > regionserver.HRegionServer: Serving as rhes75,16020,1528309715572,
> > > > RpcServer on rhes75/50.140.197.220:16020,
> sessionid=0x163d61b308c0033
> > > > 2018-06-06 19:28:37,100 INFO  [regionserver/rhes75:16020]
> > > > quotas.RegionServerRpcQuotaManager: Quota support disabled
> > > > 2018-06-06 19:28:37,100 INFO  [regionserver/rhes75:16020]
> > > > quotas.RegionServerSpaceQuotaManager: Quota support disabled, not
> > > starting
> > > > space quota manager.
> > > > 2018-06-06 19:28:40,133 INFO  [regionserver/rhes75:16020]
> > > > wal.AbstractFSWAL: WAL configuration: blocksize=256 MB, rollsize=128
> > MB,
> > > > prefix=rhes75%2C16020%2C1528309715572, suffix=,
> > > > logDir=hdfs://rhes75:9000/hbase/WALs/rhes75,16020,152
> > > > 8309715572, archiveDir=hdfs://rhes75:9000/hbase/oldWALs
> > > > 2018-06-06 19:28:40,251 ERROR [regionserver/rhes75:16020]
> > > > regionserver.HRegionServer: ***** ABORTING region server
> > > > rhes75,16020,1528309715572: Unhandled: Unable to find
> > > > org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec **
> > > > ***
> > > >
> > > > I cannot seem to fix this even after removing the hbase directory
> > > > from HDFS and ZooKeeper! Any ideas will be appreciated.
> > > >
> > > > Thanks
> > > >
> > > > Dr Mich Talebzadeh
> > > >
> > > >
> > > >
> > > > LinkedIn * https://www.linkedin.com/profile/view?id=
> > > AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> > > > <https://www.linkedin.com/profile/view?id=
> > AAEAAAAWh2gBxianrbJd6zP6AcPCCd
> > > OABUrV8Pw>*
> > > >
> > > >
> > > >
> > > > http://talebzadehmich.wordpress.com
> > > >
> > > >
> > > > *Disclaimer:* Use it at your own risk. Any and all responsibility for
> > any
> > > > loss, damage or destruction of data or any other property which may
> > arise
> > > > from relying on this email's technical content is explicitly
> > disclaimed.
> > > > The author will in no case be liable for any monetary damages arising
> > > from
> > > > such loss, damage or destruction.
> > >
> >
>

Re: Problem starting region server with Hbase version hbase-2.0.0

Posted by Mich Talebzadeh <mi...@gmail.com>.
Thanks all.

In my older version, HBase 1.2.3, I had added the correct Phoenix jar file
(phoenix-4.8.1-HBase-1.2-client.jar) to the /lib directory of HBase.

I found the correct jar file for HBase 2.0.0,
phoenix-5.0.0-alpha-HBase-2.0-client.jar:

jar tvf phoenix-5.0.0-alpha-HBase-2.0-client.jar|grep IndexedWALEditCodec
  1881 Thu Feb 08 17:36:50 GMT 2018
org/apache/hadoop/hbase/regionserver/wal/IndexedWALEditCodec$BinaryCompatibleCompressedIndexKeyValueDecoder.class
  1223 Thu Feb 08 17:36:50 GMT 2018
org/apache/hadoop/hbase/regionserver/wal/IndexedWALEditCodec$BinaryCompatibleIndexKeyValueDecoder.class
   830 Thu Feb 08 17:36:50 GMT 2018
org/apache/hadoop/hbase/regionserver/wal/IndexedWALEditCodec$BinaryCompatiblePhoenixBaseDecoder.class
  1801 Thu Feb 08 17:36:50 GMT 2018
org/apache/hadoop/hbase/regionserver/wal/IndexedWALEditCodec$CompressedIndexKeyValueDecoder.class
  1919 Thu Feb 08 17:36:50 GMT 2018
org/apache/hadoop/hbase/regionserver/wal/IndexedWALEditCodec$CompressedIndexKeyValueEncoder.class
  1143 Thu Feb 08 17:36:50 GMT 2018
org/apache/hadoop/hbase/regionserver/wal/IndexedWALEditCodec$IndexKeyValueDecoder.class
  1345 Thu Feb 08 17:36:50 GMT 2018
org/apache/hadoop/hbase/regionserver/wal/IndexedWALEditCodec$IndexKeyValueEncoder.class
   755 Thu Feb 08 17:36:50 GMT 2018
org/apache/hadoop/hbase/regionserver/wal/IndexedWALEditCodec$PhoenixBaseDecoder.class
   762 Thu Feb 08 17:36:50 GMT 2018
org/apache/hadoop/hbase/regionserver/wal/IndexedWALEditCodec$PhoenixBaseEncoder.class
  4436 Thu Feb 08 17:36:50 GMT 2018
org/apache/hadoop/hbase/regionserver/wal/IndexedWALEditCodec.class
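For what it's worth, the `jar tvf ... | grep` check above can be scripted too.
A small sketch in Python (the jar and class names are just the ones from this
thread, so treat the paths as assumptions for your own install):

```python
import zipfile

def jar_contains(jar_path, class_name):
    """Return True if the jar has an entry for the fully-qualified class
    (or one of its inner classes, e.g. Foo$Bar.class)."""
    entry = class_name.replace(".", "/") + ".class"
    with zipfile.ZipFile(jar_path) as jar:
        prefix = entry[:-len(".class")] + "$"  # matches inner classes
        return any(n == entry or n.startswith(prefix) for n in jar.namelist())

# e.g.:
# jar_contains("phoenix-5.0.0-alpha-HBase-2.0-client.jar",
#              "org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec")
```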

So the region server started OK, but then I had a problem with the master :(

java.lang.IllegalStateException: The procedure WAL relies on the ability to
hsync for proper operation during component failures, but the underlying
filesystem does not support doing so. Please check the config value of
'hbase.procedure.store.wal.use.hsync' to set the desired level of
robustness and ensure the config value of 'hbase.wal.dir' points to a
FileSystem mount that can provide it.
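For reference, the two properties that error message names go into
hbase-site.xml like this (a sketch only; the values shown are assumptions —
relaxing the hsync requirement trades away durability, and whether it helps
depends on the underlying filesystem):

```xml
<property>
  <name>hbase.procedure.store.wal.use.hsync</name>
  <value>false</value>
</property>
<property>
  <name>hbase.wal.dir</name>
  <value>hdfs://rhes75:9000/hbase/WALs</value>
</property>
```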



Re: Problem starting region server with Hbase version hbase-2.0.0

Posted by Juan Jose Escobar <ju...@gmail.com>.
Hello Mich,

Verify you have the right jars (from your comments I guess it should be
phoenix-5.0.0-alpha-HBase-2.0-server.jar), that it shows up in the HBase classpath,
and that it contains the missing class, e.g. with jar -tvf.

Also, check if there are any pending WALs that are making the startup fail;
I had a similar problem and Phoenix seemed to cause problems at startup until
I removed the WALs.
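To make the abort itself a bit less mysterious: HBase reads the WAL codec
class name from configuration (for Phoenix secondary indexes this is
typically `hbase.regionserver.wal.codec` pointing at IndexedWALEditCodec) and
instantiates it reflectively via Class.forName, so if the Phoenix jar is not
on the region server's classpath the load throws and the server aborts. A
rough Python analogue of that lookup (illustrative only; HBase does this in
Java):

```python
import importlib

def load_class(fqcn):
    """Resolve a dotted fully-qualified class name, roughly as Java's
    Class.forName does; raise if the class cannot be found."""
    module_name, _, cls_name = fqcn.rpartition(".")
    try:
        return getattr(importlib.import_module(module_name), cls_name)
    except (ImportError, AttributeError) as exc:
        # HBase wraps the failure as "Unable to find <class>" and aborts
        raise RuntimeError("Unable to find " + fqcn) from exc

# load_class("collections.OrderedDict") resolves; a codec class whose jar is
# missing from the classpath raises, which is the abort seen in the log.
```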

On Wed, Jun 6, 2018 at 10:55 PM, Mich Talebzadeh <mi...@gmail.com>
wrote:


Re: Problem starting region server with Hbase version hbase-2.0.0

Posted by Mich Talebzadeh <mi...@gmail.com>.
Thanks Sean. I downloaded Phoenix for HBase version 2
(apache-phoenix-5.0.0-alpha-HBase-2.0-bin) but I still get the same error:

2018-06-06 21:45:15,297 INFO  [regionserver/rhes75:16020]
wal.AbstractFSWAL: WAL configuration: blocksize=256 MB, rollsize=128 MB,
prefix=rhes75%2C16020%2C1528317910703, suffix=,
logDir=hdfs://rhes75:9000/hbase/WALs/rhes75,16020,1528317910703,
archiveDir=hdfs://rhes75:9000/hbase/oldWALs
2018-06-06 21:45:15,414 ERROR [regionserver/rhes75:16020]
regionserver.HRegionServer: ***** ABORTING region server
rhes75,16020,1528317910703: Unhandled: Unable to find
org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec **
***

java.lang.UnsupportedOperationException: Unable to find
org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec
        at
org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:47)
        at
org.apache.hadoop.hbase.regionserver.wal.WALCellCodec.create(WALCellCodec.java:112)
        at
org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.getCodec(AbstractProtobufLogWriter.java:75)
        at
org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.initAfterHeader0(AbstractProtobufLogWriter.java:184)
        at
org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.initAfterHeader(AbstractProtobufLogWriter.java:192)
        at
org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:174)
        at
org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(AsyncFSWALProvider.java:99)
        at
org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:612)
        at
org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:124)
        at
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:759)
        at
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:489)
        at
org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.<init>(AsyncFSWAL.java:251)
        at
org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createWAL(AsyncFSWALProvider.java:69)
        at
org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createWAL(AsyncFSWALProvider.java:44)
        at
org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:138)
        at
org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:57)
        at
org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:252)
        at
org.apache.hadoop.hbase.regionserver.HRegionServer.getWAL(HRegionServer.java:2065)
        at
org.apache.hadoop.hbase.regionserver.HRegionServer.buildServerLoad(HRegionServer.java:1291)
        at
org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1172)
        at
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:989)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException:
org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:264)
        at
org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:43)
        ... 21 more
2018-06-06 21:45:15,415 ERROR [regionserver/rhes75:16020]
regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: []
2018-06-06 21:45:15,420 INFO  [regionserver/rhes75:16020]
regionserver.HRegionServer:    "Verbose" : false,
    "ObjectPendingFinalizationCount" : 0,
    "NonHeapMemoryUsage" : {
      "committed" : 59793408,
      "init" : 2555904,
      "max" : -1,
      "used" : 58519176
    },
    "HeapMemoryUsage" : {
      "committed" : 1017708544,
      "init" : 1052770304,
      "max" : 16777347072,
      "used" : 255809352
    },
    "ObjectName" : "java.lang:type=Memory"
  } ],
  "beans" : [ {
    "name" : "Hadoop:service=HBase,name=RegionServer,sub=IPC",
    "modelerType" : "RegionServer,sub=IPC",
    "tag.Context" : "regionserver",
    "tag.Hostname" : "rhes75"
  } ],
  "beans" : [ {
    "name" : "Hadoop:service=HBase,name=RegionServer,sub=Replication",
    "modelerType" : "RegionServer,sub=Replication",
    "tag.Context" : "regionserver",
    "tag.Hostname" : "rhes75"
  } ],
  "beans" : [ {
    "name" : "Hadoop:service=HBase,name=RegionServer,sub=Server",
    "modelerType" : "RegionServer,sub=Server",
    "tag.Context" : "regionserver",
    "tag.Hostname" : "rhes75"
  } ]
}
2018-06-06 21:45:15,430 INFO  [regionserver/rhes75:16020]
regionserver.HRegionServer: ***** STOPPING region server
'rhes75,16020,1528317910703' *****
2018-06-06 21:45:15,430 INFO  [regionserver/rhes75:16020]
regionserver.HRegionServer: STOPPED: Unhandled: Unable to find
org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec
2018-06-06 21:45:15,430 INFO  [regionserver/rhes75:16020]
regionserver.SplitLogWorker: Sending interrupt to stop the worker thread
2018-06-06 21:45:15,430 INFO  [regionserver/rhes75:16020]
regionserver.HRegionServer: Stopping infoServer
2018-06-06 21:45:15,430 INFO  [SplitLogWorker-rhes75:16020]
regionserver.SplitLogWorker: SplitLogWorker interrupted. Exiting.
2018-06-06 21:45:15,430 INFO  [SplitLogWorker-rhes75:16020]
regionserver.SplitLogWorker: SplitLogWorker rhes75,16020,1528317910703
exiting
2018-06-06 21:45:15,434 INFO  [regionserver/rhes75:16020]
handler.ContextHandler: Stopped o.e.j.w.WebAppContext@1e530163
{/,null,UNAVAILABLE}{file:/data6/hduser/hbase-2.0.0/hbase-webapps/regionserver}
2018-06-06 21:45:15,436 INFO  [regionserver/rhes75:16020]
server.AbstractConnector: Stopped ServerConnector@5c60b0a0
{HTTP/1.1,[http/1.1]}{0.0.0.0:16030}
2018-06-06 21:45:15,436 INFO  [regionserver/rhes75:16020]
handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler@39651a82
{/static,file:///data6/hduser/hbase-2.0.0/hbase-webapps/static/,UNAVAILABLE}
2018-06-06 21:45:15,436 INFO  [regionserver/rhes75:16020]
handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler@70211e49
{/logs,file:///data6/hduser/hbase-2.0.0/logs/,UNAVAILABLE}
2018-06-06 21:45:15,437 INFO  [regionserver/rhes75:16020]
regionserver.HeapMemoryManager: Stopping
2018-06-06 21:45:15,437 INFO  [regionserver/rhes75:16020]
flush.RegionServerFlushTableProcedureManager: Stopping region server flush
procedure manager abruptly.
2018-06-06 21:45:15,437 INFO  [MemStoreFlusher.1]
regionserver.MemStoreFlusher: MemStoreFlusher.1 exiting
2018-06-06 21:45:15,437 INFO  [regionserver/rhes75:16020]
snapshot.RegionServerSnapshotManager: Stopping RegionServerSnapshotManager
abruptly.
2018-06-06 21:45:15,437 INFO  [MemStoreFlusher.0]
regionserver.MemStoreFlusher: MemStoreFlusher.0 exiting
2018-06-06 21:45:15,437 INFO  [regionserver/rhes75:16020]
regionserver.HRegionServer: aborting server rhes75,16020,1528317910703
2018-06-06 21:45:15,437 INFO  [regionserver/rhes75:16020]
zookeeper.ReadOnlyZKClient: Close zookeeper connection 0x2a9ccc02 to
localhost:2181
2018-06-06 21:45:15,439 INFO  [regionserver/rhes75:16020]
regionserver.HRegionServer: stopping server rhes75,16020,1528317910703; all
regions closed.
2018-06-06 21:45:15,440 INFO  [regionserver/rhes75:16020]
regionserver.Leases: Closed leases
2018-06-06 21:45:15,440 INFO  [regionserver/rhes75:16020]
hbase.ChoreService: Chore service for: regionserver/rhes75:16020 had
[[ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit:
MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000
Unit: MILLISECONDS], [ScheduledChore: Name: MovedRegionsCleaner for region
rhes75,16020,1528317910703 Period: 120000 Unit: MILLISECONDS],
[ScheduledChore: Name: MemstoreFlusherChore Period: 10000 Unit:
MILLISECONDS]] on shutdown
2018-06-06 21:45:15,440 INFO  [regionserver/rhes75:16020.logRoller]
regionserver.LogRoller: LogRoller exiting.
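The "Caused by: ClassNotFoundException" above is the root cause: HBase reads the codec class name from configuration, loads it by reflection, and wraps a failed lookup in the "Unable to find ..." error that aborts the region server. A rough sketch of that pattern (illustrative Python, not HBase's actual Java code):

```python
# Illustrative only: mimics the reflective class lookup visible in the stack
# trace. A class name that cannot be resolved is wrapped in a higher-level
# "Unable to find ..." error, which is what aborts the region server.
import importlib

def load_codec(dotted_name: str):
    """Resolve a fully-qualified class name, HBase-style."""
    module_name, _, class_name = dotted_name.rpartition(".")
    try:
        module = importlib.import_module(module_name)
        return getattr(module, class_name)
    except (ImportError, AttributeError) as exc:
        raise NotImplementedError(f"Unable to find {dotted_name}") from exc
```

The fix is therefore never in HBase itself: either make the configured class loadable (ship the Phoenix jar) or stop configuring it.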


Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 6 June 2018 at 20:49, Sean Busbey <bu...@apache.org> wrote:

> IndexedWALEditCodec is a class from the Apache Phoenix project. your
> cluster must be configured to have Phoenix run but it can't find the
> jars for phoenix.
>
> user@phoenix.apache.org is probably your best bet for getting things
> going.
>
> On Wed, Jun 6, 2018 at 1:52 PM, Mich Talebzadeh
> <mi...@gmail.com> wrote:
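For context on Sean's diagnosis: Phoenix's secondary-indexing setup has users register its WAL codec in hbase-site.xml, and that property is what makes each region server look for the class when it creates a WAL. The fragment typically looks like this (shown as a sketch; check your own hbase-site.xml for it):

```xml
<property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>
```

If this property is present but the Phoenix server jar is not on the classpath of every region server, startup aborts exactly as in the log above; either drop the jar into HBase's lib directory on each node or remove the property.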

Re: Problem starting region server with Hbase version hbase-2.0.0

Posted by Mich Talebzadeh <mi...@gmail.com>.
Thanks Sean. I downloaded Phoenix for Hbase version 2
(apache-phoenix-5.0.0-alpha-HBase-2.0-bin) but still the same error

2018-06-06 21:45:15,297 INFO  [regionserver/rhes75:16020]
wal.AbstractFSWAL: WAL configuration: blocksize=256 MB, rollsize=128 MB,
prefix=rhes75%2C16020%2C1528317910703, suffix=,
logDir=hdfs://rhes75:9000/hbase/WALs/rhes75,16020,152
8317910703, archiveDir=hdfs://rhes75:9000/hbase/oldWALs
2018-06-06 21:45:15,414 ERROR [regionserver/rhes75:16020]
regionserver.HRegionServer: ***** ABORTING region server
rhes75,16020,1528317910703: Unhandled: Unable to find
org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec **
***

*java.lang.UnsupportedOperationException: Unable to find
org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec*        at
org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:47)
        at
org.apache.hadoop.hbase.regionserver.wal.WALCellCodec.create(WALCellCodec.java:112)
        at
org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.getCodec(AbstractProtobufLogWriter.java:75)
        at
org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.initAfterHeader0(AbstractProtobufLogWriter.java:184)
        at
org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.initAfterHeader(AbstractProtobufLogWriter.java:192)
        at
org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:174)
        at
org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(AsyncFSWALProvider.java:99)
        at
org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:612)
        at
org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:124)
        at
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:759)
        at
org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:489)
        at
org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.<init>(AsyncFSWAL.java:251)
        at
org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createWAL(AsyncFSWALProvider.java:69)
        at
org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createWAL(AsyncFSWALProvider.java:44)
        at
org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:138)
        at
org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:57)
        at
org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:252)
        at
org.apache.hadoop.hbase.regionserver.HRegionServer.getWAL(HRegionServer.java:2065)
        at
org.apache.hadoop.hbase.regionserver.HRegionServer.buildServerLoad(HRegionServer.java:1291)
        at
org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1172)
        at
org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:989)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException:
org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:264)
        at
org.apache.hadoop.hbase.util.ReflectionUtils.instantiateWithCustomCtor(ReflectionUtils.java:43)
        ... 21 more
2018-06-06 21:45:15,415 ERROR [regionserver/rhes75:16020]
regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: []
2018-06-06 21:45:15,420 INFO  [regionserver/rhes75:16020]
regionserver.HRegionServer:    "Verbose" : false,
    "ObjectPendingFinalizationCount" : 0,
    "NonHeapMemoryUsage" : {
      "committed" : 59793408,
      "init" : 2555904,
      "max" : -1,
      "used" : 58519176
    },
    "HeapMemoryUsage" : {
      "committed" : 1017708544,
      "init" : 1052770304,
      "max" : 16777347072,
      "used" : 255809352
    },
    "ObjectName" : "java.lang:type=Memory"
  } ],
  "beans" : [ {
    "name" : "Hadoop:service=HBase,name=RegionServer,sub=IPC",
    "modelerType" : "RegionServer,sub=IPC",
    "tag.Context" : "regionserver",
    "tag.Hostname" : "rhes75"
  } ],
  "beans" : [ {
    "name" : "Hadoop:service=HBase,name=RegionServer,sub=Replication",
    "modelerType" : "RegionServer,sub=Replication",
    "tag.Context" : "regionserver",
    "tag.Hostname" : "rhes75"
  } ],
  "beans" : [ {
    "name" : "Hadoop:service=HBase,name=RegionServer,sub=Server",
    "modelerType" : "RegionServer,sub=Server",
    "tag.Context" : "regionserver",
    "tag.Hostname" : "rhes75"
  } ]
}
2018-06-06 21:45:15,430 INFO  [regionserver/rhes75:16020]
regionserver.HRegionServer: ***** STOPPING region server
'rhes75,16020,1528317910703' *****
2018-06-06 21:45:15,430 INFO  [regionserver/rhes75:16020]
regionserver.HRegionServer: STOPPED: Unhandled: Unable to find
org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec
2018-06-06 21:45:15,430 INFO  [regionserver/rhes75:16020]
regionserver.SplitLogWorker: Sending interrupt to stop the worker thread
2018-06-06 21:45:15,430 INFO  [regionserver/rhes75:16020]
regionserver.HRegionServer: Stopping infoServer
2018-06-06 21:45:15,430 INFO  [SplitLogWorker-rhes75:16020]
regionserver.SplitLogWorker: SplitLogWorker interrupted. Exiting.
2018-06-06 21:45:15,430 INFO  [SplitLogWorker-rhes75:16020]
regionserver.SplitLogWorker: SplitLogWorker rhes75,16020,1528317910703
exiting
2018-06-06 21:45:15,434 INFO  [regionserver/rhes75:16020]
handler.ContextHandler: Stopped o.e.j.w.WebAppContext@1e530163
{/,null,UNAVAILABLE}{file:/data6/hduser/hbase-2.0.0/hbase-webapps/regionserver}
2018-06-06 21:45:15,436 INFO  [regionserver/rhes75:16020]
server.AbstractConnector: Stopped ServerConnector@5c60b0a0
{HTTP/1.1,[http/1.1]}{0.0.0.0:16030}
2018-06-06 21:45:15,436 INFO  [regionserver/rhes75:16020]
handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler@39651a82
{/static,file:///data6/hduser/hbase-2.0.0/hbase-webapps/static/,UNAVAILABLE}
2018-06-06 21:45:15,436 INFO  [regionserver/rhes75:16020]
handler.ContextHandler: Stopped o.e.j.s.ServletContextHandler@70211e49
{/logs,file:///data6/hduser/hbase-2.0.0/logs/,UNAVAILABLE}
2018-06-06 21:45:15,437 INFO  [regionserver/rhes75:16020]
regionserver.HeapMemoryManager: Stopping
2018-06-06 21:45:15,437 INFO  [regionserver/rhes75:16020]
flush.RegionServerFlushTableProcedureManager: Stopping region server flush
procedure manager abruptly.
2018-06-06 21:45:15,437 INFO  [MemStoreFlusher.1]
regionserver.MemStoreFlusher: MemStoreFlusher.1 exiting
2018-06-06 21:45:15,437 INFO  [regionserver/rhes75:16020]
snapshot.RegionServerSnapshotManager: Stopping RegionServerSnapshotManager
abruptly.
2018-06-06 21:45:15,437 INFO  [MemStoreFlusher.0]
regionserver.MemStoreFlusher: MemStoreFlusher.0 exiting
2018-06-06 21:45:15,437 INFO  [regionserver/rhes75:16020]
regionserver.HRegionServer: aborting server rhes75,16020,1528317910703
2018-06-06 21:45:15,437 INFO  [regionserver/rhes75:16020]
zookeeper.ReadOnlyZKClient: Close zookeeper connection 0x2a9ccc02 to
localhost:2181
2018-06-06 21:45:15,439 INFO  [regionserver/rhes75:16020]
regionserver.HRegionServer: stopping server rhes75,16020,1528317910703; all
regions closed.
2018-06-06 21:45:15,440 INFO  [regionserver/rhes75:16020]
regionserver.Leases: Closed leases
2018-06-06 21:45:15,440 INFO  [regionserver/rhes75:16020]
hbase.ChoreService: Chore service for: regionserver/rhes75:16020 had
[[ScheduledChore: Name: CompactionThroughputTuner Period: 60000 Unit:
MILLISECONDS], [ScheduledChore: Name: CompactedHFilesCleaner Period: 120000
Unit: MILLISECONDS], [ScheduledChore: Name: MovedRegionsCleaner for region
rhes75,16020,1528317910703 Period: 120000 Unit: MILLISECONDS],
[ScheduledChore: Name: MemstoreFlusherChore Period: 10000 Unit:
MILLISECONDS]] on shutdown
2018-06-06 21:45:15,440 INFO  [regionserver/rhes75:16020.logRoller]
regionserver.LogRoller: LogRoller exiting.
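
The "Unable to find org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec" abort above is usually triggered by a WAL codec override in hbase-site.xml. A minimal sketch of the property that Phoenix's secondary-index setup adds (the snippet below is an illustration, not this cluster's actual config):

```xml
<!-- hbase-site.xml: Phoenix secondary indexing replaces the default WAL codec.
     If this property is present, a Phoenix server jar must be on the region
     server's classpath, or the RS aborts at startup as in the log above. -->
<property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>
```

Removing the property (if Phoenix is not wanted on the new 2.0.0 instance), or dropping in a Phoenix server jar built against HBase 2.0, are the two usual ways out.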


Dr Mich Talebzadeh



LinkedIn * https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
<https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>*



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.



On 6 June 2018 at 20:49, Sean Busbey <bu...@apache.org> wrote:

> IndexedWALEditCodec is a class from the Apache Phoenix project. Your
> cluster must be configured to run Phoenix, but it can't find the
> Phoenix jars.
>
> user@phoenix.apache.org is probably your best bet for getting things
> going.
>
> On Wed, Jun 6, 2018 at 1:52 PM, Mich Talebzadeh
> <mi...@gmail.com> wrote:
> > Hi,
> >
> > I have an old Hbase hbase-1.2.3 that runs fine on both RHES 5.6 and RHES
> 7.5
> >
> > I created a new Hbase hbase-2.0.0 instance on RHES 7.5.
> >
> > I seem to have a problem with my region server as it fails to start
> > throwing error
> >
> > 2018-06-06 19:28:37,033 INFO  [regionserver/rhes75:16020]
> > regionserver.HRegionServer: CompactionChecker runs every PT10S
> > 2018-06-06 19:28:37,071 INFO  [SplitLogWorker-rhes75:16020]
> > regionserver.SplitLogWorker: SplitLogWorker rhes75,16020,1528309715572
> > starting
> > 2018-06-06 19:28:37,073 INFO  [regionserver/rhes75:16020]
> > regionserver.HeapMemoryManager: Starting, tuneOn=false
> > 2018-06-06 19:28:37,076 INFO  [regionserver/rhes75:16020]
> > regionserver.ChunkCreator: Allocating data MemStoreChunkPool with chunk
> > size 2 MB, max count 2880, initial count 0
> > 2018-06-06 19:28:37,077 INFO  [regionserver/rhes75:16020]
> > regionserver.ChunkCreator: Allocating index MemStoreChunkPool with chunk
> > size 204.80 KB, max count 3200, initial count 0
> > 2018-06-06 19:28:37,078 INFO  [ReplicationExecutor-0]
> > regionserver.ReplicationSourceManager: Current list of replicators:
> > [rhes75,16020,1528309715572] other RSs: [rhes75,16020,1528309715572]
> > 2018-06-06 19:28:37,099 INFO  [regionserver/rhes75:16020]
> > regionserver.HRegionServer: Serving as rhes75,16020,1528309715572,
> > RpcServer on rhes75/50.140.197.220:16020, sessionid=0x163d61b308c0033
> > 2018-06-06 19:28:37,100 INFO  [regionserver/rhes75:16020]
> > quotas.RegionServerRpcQuotaManager: Quota support disabled
> > 2018-06-06 19:28:37,100 INFO  [regionserver/rhes75:16020]
> > quotas.RegionServerSpaceQuotaManager: Quota support disabled, not
> starting
> > space quota manager.
> > 2018-06-06 19:28:40,133 INFO  [regionserver/rhes75:16020]
> > wal.AbstractFSWAL: WAL configuration: blocksize=256 MB, rollsize=128 MB,
> > prefix=rhes75%2C16020%2C1528309715572, suffix=,
> > logDir=hdfs://rhes75:9000/hbase/WALs/rhes75,16020,152
> > 8309715572, archiveDir=hdfs://rhes75:9000/hbase/oldWALs
> > 2018-06-06 19:28:40,251 ERROR [regionserver/rhes75:16020]
> > regionserver.HRegionServer: ***** ABORTING region server
> > rhes75,16020,1528309715572: Unhandled: Unable to find
> > org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec *****
> >
> > I cannot seem to be able to fix this even after removing hbase directory
> > from hdfs and zookeeper! Any ideas will be appreciated.
> >
> > Thanks
> >
>

Re: Problem starting region server with Hbase version hbase-2.0.0

Posted by Sean Busbey <bu...@apache.org>.
IndexedWALEditCodec is a class from the Apache Phoenix project. Your
cluster must be configured to run Phoenix, but it can't find the
Phoenix jars.

user@phoenix.apache.org is probably your best bet for getting things going.
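
A quick way to confirm the "can't find the jars" diagnosis is to check whether any Phoenix jar in HBase's lib directory actually contains the codec class. A hedged sketch (the HBASE_HOME path and phoenix*.jar naming are assumptions about a typical install):

```shell
# Check whether a Phoenix jar providing IndexedWALEditCodec is on the
# region server's classpath (HBASE_HOME/lib). Paths are assumptions.
HBASE_HOME=${HBASE_HOME:-/data6/hduser/hbase-2.0.0}
CODEC=org/apache/hadoop/hbase/regionserver/wal/IndexedWALEditCodec.class

found=no
for jar in "$HBASE_HOME"/lib/phoenix*.jar; do
  # An unmatched glob stays literal; skip non-existent entries.
  [ -e "$jar" ] || continue
  if unzip -l "$jar" 2>/dev/null | grep -q "$CODEC"; then
    echo "codec provided by: $jar"
    found=yes
  fi
done
[ "$found" = yes ] || echo "No Phoenix jar in $HBASE_HOME/lib provides $CODEC"
```

If nothing is found, either copy in a Phoenix server jar built for your HBase version or remove the hbase.regionserver.wal.codec override from hbase-site.xml.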
