Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2023/01/13 08:23:24 UTC

[GitHub] [hudi] gtwuser opened a new issue, #7665: [SUPPORT] lock not supported error in OCC for Hive based locking

gtwuser opened a new issue, #7665:
URL: https://github.com/apache/hudi/issues/7665

   **Describe the problem you faced**
   
   
   We are hitting the error below on AWS Glue, using the locking configs listed in the reproduction steps and just two writers.
   ```bash
   Caused by: org.apache.hudi.exception.HoodieException: Unable to acquire lock, lock object null
   ```
   
   **To Reproduce**
   
   Steps to reproduce the behavior:
   
   1. Set up a Glue job with Hudi using Glue 4.0.
   2. Add PySpark code to perform an upsert operation.
   3. Use the configs below.
   Locking configs:
   ```python
               'hoodie.write.concurrency.mode': 'optimistic_concurrency_control',
               'hoodie.cleaner.policy.failed.writes': 'LAZY',
               'hoodie.bulkinsert.shuffle.parallelism': 2000,
               'hoodie.write.lock.hivemetastore.database': database,
               'hoodie.write.lock.hivemetastore.table': table_name,
               'hoodie.write.lock.provider': 'org.apache.hudi.hive.transaction.lock.HiveMetastoreBasedLockProvider',
               'hoodie.write.lock.client.wait_time_ms_between_retry': 50000,
               'hoodie.write.lock.wait_time_ms_between_retry': 20000,
               'hoodie.write.lock.wait_time_ms': 60000,
               'hoodie.write.lock.client.num_retries': 15
   ```
   4. Common configs (a minimal write sketch combining both config blocks follows the block below):
   ```python
               'className': ORG_APACHE_HUDI,
               'hoodie.datasource.hive_sync.use_jdbc': 'false',
               'hoodie.datasource.write.precombine.field': 'ts',
               'hoodie.datasource.write.recordkey.field': record_key,
               'hoodie.table.name': 'my_table',
               'hoodie.datasource.hive_sync.mode': 'hms',
               'hoodie.consistency.check.enabled': 'false',
               'hoodie.datasource.hive_sync.database': database,
               'hoodie.datasource.write.reconcile.schema': 'true',
               'hoodie.datasource.hive_sync.table': table_name,
               'hoodie.datasource.hive_sync.enable': 'true',
   ```
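   
   For reference, a minimal, hedged sketch of combining the two config blocks into the upsert call. The names `locking_configs`, `common_configs`, `df`, and `base_path` are placeholders (the issue does not show the actual write code), and the plain Spark datasource writer is assumed rather than Glue's DynamicFrame API:
   ```python
   # Hedged sketch: merge the locking and common configs and run the upsert.
   # locking_configs / common_configs hold the two dicts shown above.
   hudi_options = {**locking_configs, **common_configs}
   hudi_options['hoodie.datasource.write.operation'] = 'upsert'

   (df.write
      .format('hudi')
      .options(**hudi_options)   # PySpark stringifies the int values
      .mode('append')
      .save(base_path))          # base_path: the table's S3 location
   ```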
   
   **Expected behavior**
   
   We expect the incoming record to be saved successfully in the Hudi table.
   
   **Environment Description**
   
   * Hudi version : 0.12.2
   
   * Spark version : 3.3
   
   * Storage (HDFS/S3/GCS..) : S3
   
   * Running on Docker? (yes/no) : no
   
   
   **Additional context**
   
   We are using AWS Glue jobs (Glue 4.0) to run the Hudi script.
   
   **Stacktrace**
   
   Full stack trace:
   ```bash
   2023-01-13 06:24:18,784,784 ERROR    [cxdl4-ldf-kkj-sbx.py:310] error ingesting data: An error occurred while calling o174.save.
   : org.apache.spark.SparkException: Writing job failed.
   	at org.apache.spark.sql.errors.QueryExecutionErrors$.writingJobFailedError(QueryExecutionErrors.scala:742)
   	at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2(WriteToDataSourceV2Exec.scala:404)
   	at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2$(WriteToDataSourceV2Exec.scala:353)
   	at org.apache.spark.sql.execution.datasources.v2.AppendDataExec.writeWithV2(WriteToDataSourceV2Exec.scala:244)
   	at org.apache.spark.sql.execution.datasources.v2.V2ExistingTableWriteExec.run(WriteToDataSourceV2Exec.scala:332)
   	at org.apache.spark.sql.execution.datasources.v2.V2ExistingTableWriteExec.run$(WriteToDataSourceV2Exec.scala:331)
   	at org.apache.spark.sql.execution.datasources.v2.AppendDataExec.run(WriteToDataSourceV2Exec.scala:244)
   	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)
   	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)
   	at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)
   	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:103)
   	at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
   	at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:224)
   	at org.apache.spark.sql.execution.SQLExecution$.executeQuery$1(SQLExecution.scala:114)
   	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$7(SQLExecution.scala:139)
   	at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
   	at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:224)
   	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:139)
   	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:245)
   	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:138)
   	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
   	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:68)
   	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:100)
   	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:96)
   	at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:615)
   	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:177)
   	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:615)
   	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
   	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
   	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
   	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
   	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
   	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:591)
   	at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:96)
   	at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:83)
   	at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:81)
   	at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:124)
   	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:860)
   	at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:311)
   	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:247)
   	at org.apache.hudi.HoodieSparkSqlWriter$.bulkInsertAsRow(HoodieSparkSqlWriter.scala:592)
   	at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:180)
   	at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:144)
   	at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
   	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:75)
   	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:73)
   	at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:84)
   	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:103)
   	at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
   	at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:224)
   	at org.apache.spark.sql.execution.SQLExecution$.executeQuery$1(SQLExecution.scala:114)
   	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$7(SQLExecution.scala:139)
   	at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
   	at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:224)
   	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:139)
   	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:245)
   	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:138)
   	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
   	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:68)
   	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:100)
   	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:96)
   	at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:615)
   	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:177)
   	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:615)
   	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
   	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
   	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
   	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
   	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
   	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:591)
   	at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:96)
   	at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:83)
   	at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:81)
   	at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:124)
   	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:860)
   	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:390)
   	at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:363)
   	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:247)
   	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   	at java.lang.reflect.Method.invoke(Method.java:498)
   	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
   	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
   	at py4j.Gateway.invoke(Gateway.java:282)
   	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
   	at py4j.commands.CallCommand.execute(CallCommand.java:79)
   	at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
   	at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
   	at java.lang.Thread.run(Thread.java:750)
   Caused by: org.apache.hudi.exception.HoodieException: Unable to acquire lock, lock object null
   	at org.apache.hudi.internal.DataSourceInternalWriterHelper.commit(DataSourceInternalWriterHelper.java:87)
   	at org.apache.hudi.spark3.internal.HoodieDataSourceInternalBatchWrite.commit(HoodieDataSourceInternalBatchWrite.java:93)
   	at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2(WriteToDataSourceV2Exec.scala:392)
   	... 88 more
   	Suppressed: org.apache.hudi.exception.HoodieLockException: Unable to acquire lock, lock object null
   		at org.apache.hudi.client.transaction.lock.LockManager.lock(LockManager.java:84)
   		at org.apache.hudi.client.transaction.TransactionManager.beginTransaction(TransactionManager.java:53)
   		at org.apache.hudi.client.BaseHoodieWriteClient.initTable(BaseHoodieWriteClient.java:1452)
   		at org.apache.hudi.client.BaseHoodieWriteClient.initTable(BaseHoodieWriteClient.java:1487)
   		at org.apache.hudi.client.BaseHoodieWriteClient.rollback(BaseHoodieWriteClient.java:729)
   		at org.apache.hudi.internal.DataSourceInternalWriterHelper.abort(DataSourceInternalWriterHelper.java:95)
   		at org.apache.hudi.spark3.internal.HoodieDataSourceInternalBatchWrite.abort(HoodieDataSourceInternalBatchWrite.java:98)
   		at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2(WriteToDataSourceV2Exec.scala:399)
   		... 88 more
   	Caused by: org.apache.hudi.exception.HoodieLockException: FAILED_TO_ACQUIRE lock at database temp_db_usw2 and table tmp_table
   		at org.apache.hudi.hive.transaction.lock.HiveMetastoreBasedLockProvider.tryLock(HiveMetastoreBasedLockProvider.java:115)
   		at org.apache.hudi.client.transaction.lock.LockManager.lock(LockManager.java:73)
   		... 95 more
   	Caused by: java.util.concurrent.ExecutionException: java.lang.UnsupportedOperationException: lock is not supported
   		at java.util.concurrent.FutureTask.report(FutureTask.java:122)
   		at java.util.concurrent.FutureTask.get(FutureTask.java:206)
   		at org.apache.hudi.hive.transaction.lock.HiveMetastoreBasedLockProvider.acquireLockInternal(HiveMetastoreBasedLockProvider.java:187)
   		at org.apache.hudi.hive.transaction.lock.HiveMetastoreBasedLockProvider.acquireLock(HiveMetastoreBasedLockProvider.java:140)
   		at org.apache.hudi.hive.transaction.lock.HiveMetastoreBasedLockProvider.tryLock(HiveMetastoreBasedLockProvider.java:113)
   		... 96 more
   	Caused by: java.lang.UnsupportedOperationException: lock is not supported
   		at com.amazonaws.glue.catalog.metastore.GlueMetastoreClientDelegate.lock(GlueMetastoreClientDelegate.java:1808)
   		at com.amazonaws.glue.catalog.metastore.AWSCatalogMetastoreClient.lock(AWSCatalogMetastoreClient.java:1302)
   		at org.apache.hudi.hive.transaction.lock.HiveMetastoreBasedLockProvider.lambda$acquireLockInternal$0(HiveMetastoreBasedLockProvider.java:186)
   		at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   		at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   		at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   		... 1 more
   Caused by: org.apache.hudi.exception.HoodieLockException: Unable to acquire lock, lock object null
   	at org.apache.hudi.client.transaction.lock.LockManager.lock(LockManager.java:84)
   	at org.apache.hudi.client.transaction.TransactionManager.beginTransaction(TransactionManager.java:53)
   	at org.apache.hudi.client.BaseHoodieWriteClient.commitStats(BaseHoodieWriteClient.java:231)
   	at org.apache.hudi.client.BaseHoodieWriteClient.commitStats(BaseHoodieWriteClient.java:215)
   	at org.apache.hudi.internal.DataSourceInternalWriterHelper.commit(DataSourceInternalWriterHelper.java:84)
   	... 90 more
   Caused by: org.apache.hudi.exception.HoodieLockException: FAILED_TO_ACQUIRE lock at database temp_db_usw2 and table tmp_table
   	at org.apache.hudi.hive.transaction.lock.HiveMetastoreBasedLockProvider.tryLock(HiveMetastoreBasedLockProvider.java:115)
   	at org.apache.hudi.client.transaction.lock.LockManager.lock(LockManager.java:73)
   	... 94 more
   Caused by: java.util.concurrent.ExecutionException: java.lang.UnsupportedOperationException: lock is not supported
   	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
   	at java.util.concurrent.FutureTask.get(FutureTask.java:206)
   	at org.apache.hudi.hive.transaction.lock.HiveMetastoreBasedLockProvider.acquireLockInternal(HiveMetastoreBasedLockProvider.java:187)
   	at org.apache.hudi.hive.transaction.lock.HiveMetastoreBasedLockProvider.acquireLock(HiveMetastoreBasedLockProvider.java:140)
   	at org.apache.hudi.hive.transaction.lock.HiveMetastoreBasedLockProvider.tryLock(HiveMetastoreBasedLockProvider.java:113)
   	... 95 more
   Caused by: java.lang.UnsupportedOperationException: lock is not supported
   	at com.amazonaws.glue.catalog.metastore.GlueMetastoreClientDelegate.lock(GlueMetastoreClientDelegate.java:1808)
   	at com.amazonaws.glue.catalog.metastore.AWSCatalogMetastoreClient.lock(AWSCatalogMetastoreClient.java:1302)
   	at org.apache.hudi.hive.transaction.lock.HiveMetastoreBasedLockProvider.lambda$acquireLockInternal$0(HiveMetastoreBasedLockProvider.java:186)
   	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   	... 1 more
   
   ```
   




[GitHub] [hudi] danny0405 commented on issue #7665: [SUPPORT] lock not supported error in OCC for Hive based locking

Posted by GitBox <gi...@apache.org>.
danny0405 commented on issue #7665:
URL: https://github.com/apache/hudi/issues/7665#issuecomment-1381506280

   Dear @umehrot2, would you mind giving some insights here? It seems the Glue catalog does not support Hive table locks, right?




[GitHub] [hudi] gtwuser commented on issue #7665: [SUPPORT] lock not supported error in OCC for Hive based locking

Posted by GitBox <gi...@apache.org>.
gtwuser commented on issue #7665:
URL: https://github.com/apache/hudi/issues/7665#issuecomment-1384315355

   > So I guess this issue is solved? cc @gtwuser
   
   No, the question was actually about Hive-based locking.




[GitHub] [hudi] nsivabalan commented on issue #7665: [SUPPORT] lock not supported error in OCC for Hive based locking

Posted by "nsivabalan (via GitHub)" <gi...@apache.org>.
nsivabalan commented on issue #7665:
URL: https://github.com/apache/hudi/issues/7665#issuecomment-1400630274

   Closing the issue, as there is no action item on the Hudi side for this ask. Thanks!




[GitHub] [hudi] gtwuser commented on issue #7665: [SUPPORT] lock not supported error in OCC for Hive based locking

Posted by GitBox <gi...@apache.org>.
gtwuser commented on issue #7665:
URL: https://github.com/apache/hudi/issues/7665#issuecomment-1382682998

   Along with this, I also wanted to check on the following scenario (an illustrative config sketch follows the list):
   1. Say there are two writers on the same table.
   2. Writer one writes `{"id":1, "ts":2023-01-14:00:00:30}` // the latest record
   3. Writer two writes `{"id":1, "ts":2023-01-14:00:00:00}` // the older record
   4. What gets written if writer 1 acquires the lock first?
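   
   For illustration, a minimal sketch of pinning the outcome to the precombine field rather than to lock order, assuming `DefaultHoodieRecordPayload` (which compares the ordering field when merging with the stored record, so the larger `ts` wins regardless of which writer committed first):
   ```python
   # Hedged sketch: with DefaultHoodieRecordPayload, the record carrying the
   # larger precombine value ("ts") survives the merge, so writer two's older
   # record would not clobber writer one's newer one even if applied later.
   payload_config = {
       'hoodie.datasource.write.precombine.field': 'ts',
       'hoodie.payload.ordering.field': 'ts',
       'hoodie.datasource.write.payload.class':
           'org.apache.hudi.common.model.DefaultHoodieRecordPayload',
   }
   ```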




[GitHub] [hudi] kazdy commented on issue #7665: [SUPPORT] lock not supported error in OCC for Hive based locking

Posted by GitBox <gi...@apache.org>.
kazdy commented on issue #7665:
URL: https://github.com/apache/hudi/issues/7665#issuecomment-1382751113

   @gtwuser you can use one DDB table for many Hudi tables. By default, the DDB lock provider uses the Hudi table name as the partition key in DynamoDB (a hedged config sketch follows). See: https://hudi.apache.org/docs/next/configurations/#hoodiewritelockdynamodbpartition_key
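   
   For illustration, a minimal sketch of the DynamoDB lock settings that could replace the Hive metastore provider; the table name and region values are placeholders, not from the thread:
   ```python
   # Hedged sketch: swap the Hive-metastore lock provider for the DynamoDB
   # one. 'hudi-locks' and 'us-west-2' are placeholder values.
   dynamodb_lock_config = {
       'hoodie.write.concurrency.mode': 'optimistic_concurrency_control',
       'hoodie.cleaner.policy.failed.writes': 'LAZY',
       'hoodie.write.lock.provider':
           'org.apache.hudi.aws.transaction.lock.DynamoDBBasedLockProvider',
       'hoodie.write.lock.dynamodb.table': 'hudi-locks',        # one lock table can serve many Hudi tables
       'hoodie.write.lock.dynamodb.partition_key': 'my_table',  # defaults to the Hudi table name
       'hoodie.write.lock.dynamodb.region': 'us-west-2',
       'hoodie.write.lock.dynamodb.billing_mode': 'PAY_PER_REQUEST',
   }
   ```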




[GitHub] [hudi] danny0405 commented on issue #7665: [SUPPORT] lock not supported error in OCC for Hive based locking

Posted by GitBox <gi...@apache.org>.
danny0405 commented on issue #7665:
URL: https://github.com/apache/hudi/issues/7665#issuecomment-1384937406

   Nice ping @umehrot2; it would be great if you could give some help.




[GitHub] [hudi] danny0405 commented on issue #7665: [SUPPORT] lock not supported error in OCC for Hive based locking

Posted by GitBox <gi...@apache.org>.
danny0405 commented on issue #7665:
URL: https://github.com/apache/hudi/issues/7665#issuecomment-1383764042

   So I guess this issue is solved? cc @gtwuser




[GitHub] [hudi] nsivabalan commented on issue #7665: [SUPPORT] lock not supported error in OCC for Hive based locking

Posted by GitBox <gi...@apache.org>.
nsivabalan commented on issue #7665:
URL: https://github.com/apache/hudi/issues/7665#issuecomment-1396373333

   Thanks @kazdy for the update. @gtwuser: since this is on the AWS side, are we good to close out this issue, as Hudi as such doesn't have much to do here?




[GitHub] [hudi] nsivabalan closed issue #7665: [SUPPORT] lock not supported error in OCC for Hive based locking

Posted by "nsivabalan (via GitHub)" <gi...@apache.org>.
nsivabalan closed issue #7665: [SUPPORT] lock not supported error in OCC for Hive based locking
URL: https://github.com/apache/hudi/issues/7665




[GitHub] [hudi] kazdy commented on issue #7665: [SUPPORT] lock not supported error in OCC for Hive based locking

Posted by GitBox <gi...@apache.org>.
kazdy commented on issue #7665:
URL: https://github.com/apache/hudi/issues/7665#issuecomment-1387471175

   Glue does not support Hive transactions/locks; see: https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-hive-metastore-glue.html
   The Glue client does not implement the methods used for locking (a hedged guard sketch follows):
   https://github.com/awslabs/aws-glue-data-catalog-client-for-apache-hive-metastore/blob/83e728b47655a8592efe6441213da1c8a608f18a/aws-glue-datacatalog-client-common/src/main/java/com/amazonaws/glue/catalog/metastore/GlueMetastoreClientDelegate.java#L1301
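   
   For illustration, a minimal sketch of failing fast on this mismatch; the helper name and its wiring are hypothetical, not part of Hudi or Glue:
   ```python
   # Hypothetical guard: the Glue Data Catalog's Hive client throws
   # UnsupportedOperationException from lock(), so reject the Hive-metastore
   # lock provider up front when the job uses the Glue catalog.
   HIVE_LOCK_PROVIDER = (
       'org.apache.hudi.hive.transaction.lock.HiveMetastoreBasedLockProvider'
   )

   def validate_lock_provider(hudi_options: dict, using_glue_catalog: bool) -> None:
       provider = hudi_options.get('hoodie.write.lock.provider', '')
       if using_glue_catalog and provider == HIVE_LOCK_PROVIDER:
           raise ValueError(
               'The AWS Glue Data Catalog does not implement Hive metastore '
               'locks; use a DynamoDB- or ZooKeeper-based lock provider.'
           )
   ```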




[GitHub] [hudi] gtwuser commented on issue #7665: [SUPPORT] lock not supported error in OCC for Hive based locking

Posted by GitBox <gi...@apache.org>.
gtwuser commented on issue #7665:
URL: https://github.com/apache/hudi/issues/7665#issuecomment-1381470010

   @nsivabalan @umehrot2 @yihua please provide some help; increasing the timeouts did not help, as you can see from the configs. Kindly provide some info on this.




[GitHub] [hudi] gtwuser commented on issue #7665: [SUPPORT] lock not supported error in OCC for Hive based locking

Posted by GitBox <gi...@apache.org>.
gtwuser commented on issue #7665:
URL: https://github.com/apache/hudi/issues/7665#issuecomment-1381488343

   Also curious to know: if the Hive DB and table are not pre-created, won't Hudi create them itself, just as the DynamoDB table is created?

