Posted to dev@phoenix.apache.org by "Francisco Pareja (JIRA)" <ji...@apache.org> on 2017/06/14 11:30:00 UTC

[jira] [Commented] (PHOENIX-3721) CSV bulk load doesn't work well with SYSTEM.MUTEX

    [ https://issues.apache.org/jira/browse/PHOENIX-3721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16049074#comment-16049074 ] 

Francisco Pareja commented on PHOENIX-3721:
-------------------------------------------

Hello,

I have come across this "MUTEX" issue as well when upserting records into a table from Spark Streaming (multiple executors) using the JDBC connector.
If I execute a single insertion on its own, the error does not occur.
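For reference, a minimal sketch of the upsert each executor performs (the JDBC URL, table, and column names below are placeholders, not my real schema):

{noformat}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class PhoenixUpsertSketch {
    public static void main(String[] args) throws Exception {
        // Each executor opens its own Phoenix connection; the first
        // connection's initialization is where SYSTEM.MUTEX is created/checked.
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:zk-host:2181")) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                     "UPSERT INTO MY_TABLE (PK_COL, VAL_COL) VALUES (?, ?)")) {
                ps.setString(1, "key-1");
                ps.setString(2, "value-1");
                ps.executeUpdate();
            }
            conn.commit();
        }
    }
}
{noformat}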

I am using 4.10 on both the server and the client.

I tried [~aroberts@fuze.com]'s suggestion of connecting a 4.9 client to the server. In that case it does not error, but it only inserts data into the columns that are part of the primary key and leaves all the other columns blank. It does this even when I execute the insertion on its own: again no error, but NULL values instead of the data in all columns except the ones that are part of the PK.

Hope this helps.

> CSV bulk load doesn't work well with SYSTEM.MUTEX
> -------------------------------------------------
>
>                 Key: PHOENIX-3721
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3721
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.10.0
>            Reporter: Sergey Soldatov
>            Priority: Blocker
>
> This is quite strange. I'm using HBase 1.2.4 and current master branch.
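> For reference, a typical invocation (jar name, table, and input path are placeholders):
> {noformat}
> hadoop jar phoenix-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool \
>     --table EXAMPLE --input /data/example.csv
> {noformat}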
> While running a CSV bulk load in the regular way, I got the following exception:
> {noformat}
> Exception in thread "main" java.sql.SQLException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hbase.TableExistsException): SYSTEM.MUTEX
> 	at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2465)
> 	at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2382)
> 	at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
> 	at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2382)
> 	at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
> 	at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:149)
> 	at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
> 	at java.sql.DriverManager.getConnection(DriverManager.java:664)
> 	at java.sql.DriverManager.getConnection(DriverManager.java:208)
> 	at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:337)
> 	at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:329)
> 	at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.loadData(AbstractBulkLoadTool.java:209)
> 	at org.apache.phoenix.mapreduce.AbstractBulkLoadTool.run(AbstractBulkLoadTool.java:183)
> 	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> 	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> 	at org.apache.phoenix.mapreduce.CsvBulkLoadTool.main(CsvBulkLoadTool.java:109)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:497)
> 	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> 	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hbase.TableExistsException): SYSTEM.MUTEX
> 	at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.prepareCreate(CreateTableProcedure.java:285)
> 	at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:106)
> 	at org.apache.hadoop.hbase.master.procedure.CreateTableProcedure.executeFromState(CreateTableProcedure.java:58)
> 	at org.apache.hadoop.hbase.procedure2.StateMachineProcedure.execute(StateMachineProcedure.java:119)
> 	at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:498)
> 	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1061)
> 	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:856)
> 	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execLoop(ProcedureExecutor.java:809)
> 	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.access$400(ProcedureExecutor.java:75)
> 	at org.apache.hadoop.hbase.procedure2.ProcedureExecutor$2.run(ProcedureExecutor.java:495)
> {noformat}
> I checked the code, and it seems the problem is in the createSysMutexTable function. It expects TableExistsException (and skips it), but in my case the exception is wrapped in a RemoteException, so it is not skipped and the init fails. The easy fix is to also handle RemoteException and check whether it wraps TableExistsException, but that looks a bit ugly.
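> A rough sketch of what I mean (untested; the class and helper are made up for illustration):
> {noformat}
> import java.io.IOException;
> import org.apache.hadoop.hbase.TableExistsException;
> import org.apache.hadoop.ipc.RemoteException;
>
> // Sketch only: treat "SYSTEM.MUTEX already exists" as benign, even when the
> // master reports it wrapped in a RemoteException.
> public class MutexCreateCheck {
>     static boolean isTableExists(IOException e) {
>         if (e instanceof TableExistsException) {
>             return true;
>         }
>         // RemoteException only carries the class name of the server-side cause
>         return e instanceof RemoteException
>             && TableExistsException.class.getName()
>                    .equals(((RemoteException) e).getClassName());
>     }
> }
> {noformat}
> So createSysMutexTable would catch IOException, run a check like this, and rethrow only when it is not a table-exists case.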
> [~jamestaylor] [~samarthjain] any thoughts? 


