Posted to dev@hbase.apache.org by ankit beohar <an...@gmail.com> on 2016/07/29 03:15:36 UTC

Hbase USERT

Hi Hbase,

My use case is: I receive files and want to insert their records into HBase
by rowkey; if the rowkey already exists, I have to update the stored value
by combining the old and new values.

For this I wrote a MapReduce job that reads the current value for each
rowkey and handles the insert-or-update in an if/else branch, but after only
0.1 million records the HBase region server goes down.
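
For reference, a minimal sketch of that read-merge-write pattern, assuming
the HBase 1.x client API; the class name, column coordinates, and merge rule
(plain byte concatenation via Bytes.add) are placeholders, not my actual job
code:

    import java.io.IOException;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class RowUpsert {
        // Insert the row if the key is absent; otherwise merge old and new.
        // The merge here is byte concatenation -- substitute your own rule.
        static void upsert(Table table, byte[] row, byte[] cf, byte[] qual,
                           byte[] newValue) throws IOException {
            Result existing = table.get(new Get(row).addColumn(cf, qual));
            byte[] merged = existing.isEmpty()
                    ? newValue
                    : Bytes.add(existing.getValue(cf, qual), newValue);
            table.put(new Put(row).addColumn(cf, qual, merged));
        }
    }

If "old+new" really means concatenation, the server-side Append operation
(table.append(new Append(row).add(cf, qual, newValue))) does the merge
atomically and saves the per-row Get; for numeric sums there is Increment.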

Any ideas on this?

I also tried Phoenix UPSERT, but the same error occurs.
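
(The Phoenix attempt was an UPSERT statement along these lines; the table
and column names here are hypothetical:

    UPSERT INTO EVENTS (ROWKEY, VAL) VALUES ('row1', 'value1');

Note that a plain UPSERT VALUES overwrites the existing cell rather than
combining old and new, so the merge still happens client-side.)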

Please help me out with this.

Best Regards,
ANKIT BEOHAR

RE: Hbase USERT

Posted by "Du, Jingcheng" <ji...@intel.com>.
This is because the HDFS cluster did not start successfully. You can check the namenode/datanode logs for further information.
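
As a quick check, the standard HDFS admin report (a stock Hadoop command,
suggested here for convenience) shows how many datanodes the namenode can
see; the trace below reports "0 datanode(s) running":

    hdfs dfsadmin -report

If it lists no live datanodes, the datanode log should say why the process
failed to start or register.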

Regards,
Jingcheng

-----Original Message-----
From: ankit beohar [mailto:ankitbeohar90@gmail.com] 
Sent: Friday, July 29, 2016 1:06 PM
To: dev@hbase.apache.org
Cc: user@hbase.apache.org
Subject: Re: Hbase USERT

Hi All,

We will monitor memory usage; meanwhile, below is the error:

org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/WALs/quickstart.cloudera,60020,1469713295051/quickstart.cloudera%2C60020%2C1469713295051..meta.1469766981975.meta could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1595)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3287)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:677)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:213)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:485)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)

        at org.apache.hadoop.ipc.Client.call(Client.java:1471)
        at org.apache.hadoop.ipc.Client.call(Client.java:1408)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
        at com.sun.proxy.$Proxy22.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:404)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
        at com.sun.proxy.$Proxy23.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
        at com.sun.proxy.$Proxy24.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1704)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1500)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:668)

2016-07-28 21:37:49,143 WARN org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter: Failed to write trailer, non-fatal, continuing...

org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/WALs/quickstart.cloudera,60020,1469713295051/quickstart.cloudera%2C60020%2C1469713295051.null0.1469766981975 could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1595)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3287)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:677)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:213)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:485)

Best Regards,
ANKIT BEOHAR


On Fri, Jul 29, 2016 at 9:10 AM, Du, Jingcheng <ji...@intel.com>
wrote:

> Hi Ankit,
>
> It seems like a memory issue in the region servers. Did you monitor the
> memory usage in the region servers during the run? How about increasing
> the heap size? Do you get any exceptions after the region server goes
> down? Mind sharing them here?
>
> Regards,
> Jingcheng
>
> -----Original Message-----
> From: Dima Spivak [mailto:dspivak@cloudera.com]
> Sent: Friday, July 29, 2016 11:32 AM
> To: user@hbase.apache.org
> Subject: Re: Hbase USERT
>
> Hey Ankit,
>
> Moving the dev list to bcc and adding the user mailing list as the 
> recipient. Maybe a fellow user can offer some suggestions.
>
> All the best,
>   Dima
>
> On Thursday, July 28, 2016, ankit beohar <an...@gmail.com> wrote:
>
> > Hi Hbase,
> >
> > My use case is: I receive files and want to insert their records into
> > HBase by rowkey; if the rowkey already exists, I have to update the
> > stored value by combining the old and new values.
> >
> > For this I wrote a MapReduce job that reads the current value for each
> > rowkey and handles the insert-or-update in an if/else branch, but after
> > only 0.1 million records the HBase region server goes down.
> >
> > Any ideas on this?
> >
> > I also tried Phoenix UPSERT, but the same error occurs.
> >
> > Please help me out with this.
> >
> > Best Regards,
> > ANKIT BEOHAR
> >
>

Re: Hbase USERT

Posted by ankit beohar <an...@gmail.com>.
Hi All,

We will monitor memory usage; meanwhile, below is the error:

org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/WALs/quickstart.cloudera,60020,1469713295051/quickstart.cloudera%2C60020%2C1469713295051..meta.1469766981975.meta could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1595)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3287)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:677)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:213)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:485)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)

        at org.apache.hadoop.ipc.Client.call(Client.java:1471)
        at org.apache.hadoop.ipc.Client.call(Client.java:1408)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
        at com.sun.proxy.$Proxy22.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:404)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
        at com.sun.proxy.$Proxy23.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
        at com.sun.proxy.$Proxy24.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1704)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1500)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:668)

2016-07-28 21:37:49,143 WARN org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter: Failed to write trailer, non-fatal, continuing...

org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/WALs/quickstart.cloudera,60020,1469713295051/quickstart.cloudera%2C60020%2C1469713295051.null0.1469766981975 could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1595)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3287)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:677)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:213)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:485)

Best Regards,
ANKIT BEOHAR


On Fri, Jul 29, 2016 at 9:10 AM, Du, Jingcheng <ji...@intel.com>
wrote:

> Hi Ankit,
>
> It seems like a memory issue in the region servers. Did you monitor the
> memory usage in the region servers during the run? How about increasing
> the heap size? Do you get any exceptions after the region server goes
> down? Mind sharing them here?
>
> Regards,
> Jingcheng
>
> -----Original Message-----
> From: Dima Spivak [mailto:dspivak@cloudera.com]
> Sent: Friday, July 29, 2016 11:32 AM
> To: user@hbase.apache.org
> Subject: Re: Hbase USERT
>
> Hey Ankit,
>
> Moving the dev list to bcc and adding the user mailing list as the
> recipient. Maybe a fellow user can offer some suggestions.
>
> All the best,
>   Dima
>
> On Thursday, July 28, 2016, ankit beohar <an...@gmail.com> wrote:
>
> > Hi Hbase,
> >
> > My use case is: I receive files and want to insert their records into
> > HBase by rowkey; if the rowkey already exists, I have to update the
> > stored value by combining the old and new values.
> >
> > For this I wrote a MapReduce job that reads the current value for each
> > rowkey and handles the insert-or-update in an if/else branch, but after
> > only 0.1 million records the HBase region server goes down.
> >
> > Any ideas on this?
> >
> > I also tried Phoenix UPSERT, but the same error occurs.
> >
> > Please help me out with this.
> >
> > Best Regards,
> > ANKIT BEOHAR
> >
>

RE: Hbase USERT

Posted by "Du, Jingcheng" <ji...@intel.com>.
Hi Ankit,

It seems like a memory issue in the region servers. Did you monitor the memory usage in the region servers during the run? How about increasing the heap size?
Do you get any exceptions after the region server goes down? Mind sharing them here?
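
If memory does turn out to be the problem, the region server heap is raised
in conf/hbase-env.sh. A minimal sketch, with 4g as a purely illustrative
value rather than a sizing recommendation:

    # conf/hbase-env.sh -- example value only; size the heap to your hardware
    export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xmx4g -Xms4g"

The region server has to be restarted to pick this up.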

Regards,
Jingcheng

-----Original Message-----
From: Dima Spivak [mailto:dspivak@cloudera.com] 
Sent: Friday, July 29, 2016 11:32 AM
To: user@hbase.apache.org
Subject: Re: Hbase USERT

Hey Ankit,

Moving the dev list to bcc and adding the user mailing list as the recipient. Maybe a fellow user can offer some suggestions.

All the best,
  Dima

On Thursday, July 28, 2016, ankit beohar <an...@gmail.com> wrote:

> Hi Hbase,
>
> My use case is: I receive files and want to insert their records into
> HBase by rowkey; if the rowkey already exists, I have to update the
> stored value by combining the old and new values.
>
> For this I wrote a MapReduce job that reads the current value for each
> rowkey and handles the insert-or-update in an if/else branch, but after
> only 0.1 million records the HBase region server goes down.
>
> Any ideas on this?
>
> I also tried Phoenix UPSERT, but the same error occurs.
>
> Please help me out with this.
>
> Best Regards,
> ANKIT BEOHAR
>

Re: Hbase USERT

Posted by Dima Spivak <ds...@cloudera.com>.
Hey Ankit,

Moving the dev list to bcc and adding the user mailing list as the
recipient. Maybe a fellow user can offer some suggestions.

All the best,
  Dima

On Thursday, July 28, 2016, ankit beohar <an...@gmail.com> wrote:

> Hi Hbase,
>
> My use case is: I receive files and want to insert their records into
> HBase by rowkey; if the rowkey already exists, I have to update the
> stored value by combining the old and new values.
>
> For this I wrote a MapReduce job that reads the current value for each
> rowkey and handles the insert-or-update in an if/else branch, but after
> only 0.1 million records the HBase region server goes down.
>
> Any ideas on this?
>
> I also tried Phoenix UPSERT, but the same error occurs.
>
> Please help me out with this.
>
> Best Regards,
> ANKIT BEOHAR
>
