Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2022/04/11 14:32:56 UTC

[GitHub] [hudi] easonwood opened a new issue, #5290: [SUPPORT] Problems in handling column deletions in Hudi

easonwood opened a new issue, #5290:
URL: https://github.com/apache/hudi/issues/5290

   
   I have a Spark job with the following pipeline:
   MySQL binlog data -> Parquet files on S3 -> Hudi (external tables on S3).
   
   The code looks like this:
   val dataDF = spark.read.option("mergeSchema", "true").parquet(parquetPaths: _*)
   val hudiOptions = Map(
     "hoodie.metadata.enable" -> "true",
     "hoodie.datasource.write.operation" -> "upsert",
     "hoodie.datasource.write.table.type" -> "COPY_ON_WRITE"
   )
   dataDF
     .write
     .format("org.apache.hudi")
     .options(hudiOptions)
     .mode(SaveMode.Append)
     .save(tablePath)
   
   The job ran well until it hit a MySQL column deletion, at which point I got this error:
   Caused by: org.apache.parquet.io.InvalidRecordException: Parquet/Avro schema mismatch: Avro field 'col1' not found
   
   To handle it, I followed the method described here:
   https://hudi.apache.org/docs/troubleshooting/#caused-by-orgapacheparquetioinvalidrecordexception-parquetavro-schema-mismatch-avro-field-col1-not-found
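   
   For reference, a minimal sketch of how tableSchema (including the dropped columns)
   might be derived from the existing Hudi table rather than hard-coded. It uses Hudi's
   TableSchemaResolver and spark-avro's SchemaConverters; treat the exact wiring as an
   assumption and verify it against your Hudi version:
   
   import org.apache.hudi.common.table.{HoodieTableMetaClient, TableSchemaResolver}
   import org.apache.spark.sql.avro.SchemaConverters
   import org.apache.spark.sql.types.StructType
   
   // Read the latest committed Avro schema from the table's timeline ...
   val metaClient = HoodieTableMetaClient.builder()
     .setConf(spark.sparkContext.hadoopConfiguration)
     .setBasePath(tablePath)
     .build()
   val avroSchema = new TableSchemaResolver(metaClient).getTableAvroSchema
   // ... and convert it to a Spark StructType for use in spark.read.schema(...)
   val tableSchema = SchemaConverters.toSqlType(avroSchema).dataType.asInstanceOf[StructType]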
   
   Following that, I changed the code to:
   val dataDF = spark.read.schema(tableSchema).parquet(pathsNeedConsume: _*)
   // tableSchema contains the deleted old columns.
   // dataDF.show here confirms the schema and data are compatible; the deleted columns come back as null.
   val hudiOptions = Map(
     "hoodie.metadata.enable" -> "true",
     "hoodie.datasource.write.operation" -> "upsert",
     "hoodie.datasource.write.table.type" -> "COPY_ON_WRITE"
   )
   dataDF
     .write
     .format("org.apache.hudi")
     .options(hudiOptions)
     .mode(SaveMode.Append)
     .save(tablePath)
   The save now fails with this error:
   22/04/11 10:32:17 ERROR BaseTableMetadata: Failed to retrieve files in partition (..... tablePath .....) from metadata
   org.apache.hudi.exception.HoodieMetadataException: Metadata record for partition db_cluster=qa01 is inconsistent: HoodieMetadataPayload {key=db_cluster=qa01, type=2, creations=[5969c7b9-a1f5-4bcb-8382-8809ec0cd067-0_0-85-1188_20220411102336.parquet, 5969c7b9-a1f5-4bcb-8382-8809ec0cd067-0_0-85-1195_20220411101502.parquet, 5969c7b9-a1f5-4bcb-8382-8809ec0cd067-0_0-87-537_20220411092820.parquet ....... a lot parquet in tablePath], deletions=[5969c7b9-a1f5-4bcb-8382-8809ec0cd067-0_0-29-245_20220331071910.parquet, 5969c7b9-a1f5-4bcb-8382-8809ec0cd067-0_0-29-246_20220331071910.parquet .......... a lot other parquet in tablePath], }
   	at org.apache.hudi.metadata.BaseTableMetadata.fetchAllFilesInPartition(BaseTableMetadata.java:212)
   	at org.apache.hudi.metadata.BaseTableMetadata.getAllFilesInPartition(BaseTableMetadata.java:130)
   	at org.apache.hudi.metadata.HoodieMetadataFileSystemView.listPartition(HoodieMetadataFileSystemView.java:65)
   	at org.apache.hudi.common.table.view.AbstractTableFileSystemView.lambda$ensurePartitionLoadedCorrectly$9(AbstractTableFileSystemView.java:280)
   	at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660)
   	at org.apache.hudi.common.table.view.AbstractTableFileSystemView.ensurePartitionLoadedCorrectly(AbstractTableFileSystemView.java:269)
   	at org.apache.hudi.common.table.view.AbstractTableFileSystemView.getLatestBaseFilesBeforeOrOn(AbstractTableFileSystemView.java:455)
   	at org.apache.hudi.timeline.service.handlers.BaseFileHandler.getLatestDataFilesBeforeOrOn(BaseFileHandler.java:57)
   	at org.apache.hudi.timeline.service.RequestHandler.lambda$registerDataFilesAPI$6(RequestHandler.java:239)
   	at org.apache.hudi.timeline.service.RequestHandler$ViewHandler.handle(RequestHandler.java:430)
   	at io.javalin.core.security.SecurityUtil.noopAccessManager(SecurityUtil.kt:22)
   	at io.javalin.http.JavalinServlet$addHandler$protectedHandler$1.handle(JavalinServlet.kt:116)
   	at io.javalin.http.JavalinServlet$service$2$1.invoke(JavalinServlet.kt:45)
   	at io.javalin.http.JavalinServlet$service$2$1.invoke(JavalinServlet.kt:24)
   	at io.javalin.http.JavalinServlet$service$1.invoke(JavalinServlet.kt:123)
   	at io.javalin.http.JavalinServlet$service$2.invoke(JavalinServlet.kt:40)
   	at io.javalin.http.JavalinServlet.service(JavalinServlet.kt:75)
   	at javax.servlet.http.HttpServlet.service(HttpServlet.java:590)
   	at org.apache.hudi.org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:852)
   	at org.apache.hudi.org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:544)
   	at org.apache.hudi.org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
   	at org.apache.hudi.org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1581)
   	at org.apache.hudi.org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
   	at io.javalin.core.JavalinServer$start$httpHandler$1.doHandle(JavalinServer.kt:53)
   	at org.apache.hudi.org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
   	at org.apache.hudi.org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:482)
   	at org.apache.hudi.org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1549)
   	at org.apache.hudi.org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
   	at org.apache.hudi.org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1204)
   	at org.apache.hudi.org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
   	at org.apache.hudi.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:59)
   	at org.apache.hudi.org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:173)
   	at org.apache.hudi.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
   	at org.apache.hudi.org.eclipse.jetty.server.Server.handle(Server.java:494)
   	at org.apache.hudi.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:374)
   	at org.apache.hudi.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:268)
   	at org.apache.hudi.org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
   	at org.apache.hudi.org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
   	at org.apache.hudi.org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
   	at org.apache.hudi.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:782)
   	at org.apache.hudi.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:918)
   	at java.lang.Thread.run(Thread.java:750)
   
   
   
   * Hudi version: 0.8
   
   * Spark version: 3.1.2
   
   * Storage: S3
   
   
   
   




[GitHub] [hudi] easonwood commented on issue #5290: [SUPPORT] Problems in handling column deletions in Hudi

Posted by GitBox <gi...@apache.org>.
easonwood commented on issue #5290:
URL: https://github.com/apache/hudi/issues/5290#issuecomment-1095134209

   The error is similar to:
   https://github.com/apache/hudi/issues/3297




[GitHub] [hudi] easonwood commented on issue #5290: [SUPPORT] Problems in handling column deletions in Hudi

Posted by GitBox <gi...@apache.org>.
easonwood commented on issue #5290:
URL: https://github.com/apache/hudi/issues/5290#issuecomment-1110836451

   @codope 
   @nsivabalan 
   I didn't try disabling the metadata table.
   Instead, I set this config:
   "hoodie.fail.on.timeline.archiving" -> "false"
   and the job runs well now.
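   
   Plugged into the options map from the original snippet, the workaround would look
   roughly like this (only the archiving flag is new; everything else is unchanged):
   
   val hudiOptions = Map(
     "hoodie.metadata.enable" -> "true",
     "hoodie.datasource.write.operation" -> "upsert",
     "hoodie.datasource.write.table.type" -> "COPY_ON_WRITE",
     // Workaround: don't fail the commit when timeline archiving hits a bad instant file.
     "hoodie.fail.on.timeline.archiving" -> "false"
   )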




[GitHub] [hudi] nsivabalan commented on issue #5290: [SUPPORT] Problems in handling column deletions in Hudi

Posted by GitBox <gi...@apache.org>.
nsivabalan commented on issue #5290:
URL: https://github.com/apache/hudi/issues/5290#issuecomment-1110469721

   @easonwood : do you have any further updates on Sagar's clarification above?




[GitHub] [hudi] easonwood commented on issue #5290: [SUPPORT] Problems in handling column deletions in Hudi

Posted by GitBox <gi...@apache.org>.
easonwood commented on issue #5290:
URL: https://github.com/apache/hudi/issues/5290#issuecomment-1097581271

   I found an empty instant file: 20220411034143.rollback.inflight
   After deleting it, the job runs and the data is loaded into Hudi successfully.
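   
   A sketch of how such zero-byte instant files might be located before removing anything
   by hand (assumes the standard .hoodie timeline layout under tablePath):
   
   import org.apache.hadoop.fs.Path
   
   val fs = new Path(tablePath).getFileSystem(spark.sparkContext.hadoopConfiguration)
   val timeline = new Path(tablePath, ".hoodie")
   fs.listStatus(timeline)
     .filter(s => s.isFile && s.getLen == 0) // zero-byte instants, e.g. 20220411034143.rollback.inflight
     .foreach(s => println(s.getPath))       // inspect (and back up) before deleting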
   
   




[GitHub] [hudi] yihua commented on issue #5290: [SUPPORT] Problems in handling column deletions in Hudi

Posted by GitBox <gi...@apache.org>.
yihua commented on issue #5290:
URL: https://github.com/apache/hudi/issues/5290#issuecomment-1111767712

   @easonwood Great that it's working fine now.  We have fixed a few issues around empty instant files and metadata table update logic since Hudi 0.8.0. You can also give the latest master a try with metadata table enabled.  I'll close this issue now.  Feel free to reopen the issue if you see more problems.




[GitHub] [hudi] easonwood commented on issue #5290: [SUPPORT] Problems in handling column deletions in Hudi

Posted by GitBox <gi...@apache.org>.
easonwood commented on issue #5290:
URL: https://github.com/apache/hudi/issues/5290#issuecomment-1097571587

   Sometimes the error persists like this and blocks the job:
   22/04/13 03:45:29 ERROR HoodieTimelineArchiveLog: Failed to archive commits, .commit file: 20220411034143.rollback.inflight
   java.io.IOException: Not an Avro data file
   	at org.apache.avro.file.DataFileReader.openReader(DataFileReader.java:50)
   	at org.apache.hudi.common.table.timeline.TimelineMetadataUtils.deserializeAvroMetadata(TimelineMetadataUtils.java:175)
   	at org.apache.hudi.client.utils.MetadataConversionUtils.createMetaWrapper(MetadataConversionUtils.java:84)
   	at org.apache.hudi.table.HoodieTimelineArchiveLog.convertToAvroRecord(HoodieTimelineArchiveLog.java:370)
   	at org.apache.hudi.table.HoodieTimelineArchiveLog.archive(HoodieTimelineArchiveLog.java:311)
   	at org.apache.hudi.table.HoodieTimelineArchiveLog.archiveIfRequired(HoodieTimelineArchiveLog.java:128)
   	at org.apache.hudi.client.AbstractHoodieWriteClient.postCommit(AbstractHoodieWriteClient.java:430)
   	at org.apache.hudi.client.AbstractHoodieWriteClient.commitStats(AbstractHoodieWriteClient.java:186)
   	at org.apache.hudi.client.SparkRDDWriteClient.commit(SparkRDDWriteClient.java:121)
   	at org.apache.hudi.HoodieSparkSqlWriter$.commitAndPerformPostOperations(HoodieSparkSqlWriter.scala:479)
   	at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:223)
   	at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:145)
   	at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
   	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
   	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
   	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:90)
   	at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:194)
   	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:232)
   	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
   	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:229)
   	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:190)
   	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:134)
   	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:133)
   	at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:989)
   	at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
   	at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:232)
   	at org.apache.spark.sql.execution.SQLExecution$.executeQuery$1(SQLExecution.scala:110)
   	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:135)
   	at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
   	at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:232)
   	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:135)
   	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:253)
   	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:134)
   	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
   	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:68)
   	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:989)
   	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:438)
   	at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:415)
   	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:293)
   	at us.zoom.op.DmsStreamingIncrement$.saveHudi(DmsStreamingIncrement.scala:278)
   	at us.zoom.op.DmsStreamingIncrement$.$anonfun$main$6(DmsStreamingIncrement.scala:248)
   	at java.util.TreeMap.forEach(TreeMap.java:1005)
   	at us.zoom.op.DmsStreamingIncrement$.main(DmsStreamingIncrement.scala:177)
   	at us.zoom.op.DmsStreamingIncrement.main(DmsStreamingIncrement.scala)
   	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   	at java.lang.reflect.Method.invoke(Method.java:498)
   	at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:735)
   22/04/13 03:45:29 ERROR ApplicationMaster: User class threw exception: org.apache.hudi.exception.HoodieCommitException: Failed to archive commits
   org.apache.hudi.exception.HoodieCommitException: Failed to archive commits
   	at org.apache.hudi.table.HoodieTimelineArchiveLog.archive(HoodieTimelineArchiveLog.java:324)
   	at org.apache.hudi.table.HoodieTimelineArchiveLog.archiveIfRequired(HoodieTimelineArchiveLog.java:128)
   	at org.apache.hudi.client.AbstractHoodieWriteClient.postCommit(AbstractHoodieWriteClient.java:430)
   	at org.apache.hudi.client.AbstractHoodieWriteClient.commitStats(AbstractHoodieWriteClient.java:186)
   	at org.apache.hudi.client.SparkRDDWriteClient.commit(SparkRDDWriteClient.java:121)
   	at org.apache.hudi.HoodieSparkSqlWriter$.commitAndPerformPostOperations(HoodieSparkSqlWriter.scala:479)
   	at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:223)
   	at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:145)
   	at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
   	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
   	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
   	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:90)
   	at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:194)
   	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:232)
   	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
   	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:229)
   	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:190)
   	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:134)
   	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:133)
   	at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:989)
   	at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
   	at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:232)
   	at org.apache.spark.sql.execution.SQLExecution$.executeQuery$1(SQLExecution.scala:110)
   	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:135)
   	at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
   	at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:232)
   	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:135)
   	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:253)
   	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:134)
   	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
   	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:68)
   	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:989)
   	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:438)
   	at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:415)
   	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:293)
   	at us.zoom.op.DmsStreamingIncrement$.saveHudi(DmsStreamingIncrement.scala:278)
   	at us.zoom.op.DmsStreamingIncrement$.$anonfun$main$6(DmsStreamingIncrement.scala:248)
   	at java.util.TreeMap.forEach(TreeMap.java:1005)
   	at us.zoom.op.DmsStreamingIncrement$.main(DmsStreamingIncrement.scala:177)
   	at us.zoom.op.DmsStreamingIncrement.main(DmsStreamingIncrement.scala)
   	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   	at java.lang.reflect.Method.invoke(Method.java:498)
   	at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:735)
   Caused by: java.io.IOException: Not an Avro data file
   	at org.apache.avro.file.DataFileReader.openReader(DataFileReader.java:50)
   	at org.apache.hudi.common.table.timeline.TimelineMetadataUtils.deserializeAvroMetadata(TimelineMetadataUtils.java:175)
   	at org.apache.hudi.client.utils.MetadataConversionUtils.createMetaWrapper(MetadataConversionUtils.java:84)
   	at org.apache.hudi.table.HoodieTimelineArchiveLog.convertToAvroRecord(HoodieTimelineArchiveLog.java:370)
   	at org.apache.hudi.table.HoodieTimelineArchiveLog.archive(HoodieTimelineArchiveLog.java:311)
   	... 44 more




[GitHub] [hudi] codope commented on issue #5290: [SUPPORT] Problems in handling column deletions in Hudi

Posted by GitBox <gi...@apache.org>.
codope commented on issue #5290:
URL: https://github.com/apache/hudi/issues/5290#issuecomment-1104012538

   @easonwood Is metadata enabled for the table? Did you try disabling the metadata table?
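   
   For completeness, disabling the metadata table would just flip the flag from the
   original snippet (illustrative; the other options stay as they were):
   
   val hudiOptions = Map(
     "hoodie.metadata.enable" -> "false", // write without the metadata table
     "hoodie.datasource.write.operation" -> "upsert",
     "hoodie.datasource.write.table.type" -> "COPY_ON_WRITE"
   )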




[GitHub] [hudi] yihua closed issue #5290: [SUPPORT] Problems in handling column deletions in Hudi

Posted by GitBox <gi...@apache.org>.
yihua closed issue #5290: [SUPPORT] Problems in handling column deletions in Hudi
URL: https://github.com/apache/hudi/issues/5290




[GitHub] [hudi] easonwood commented on issue #5290: [SUPPORT] Problems in handling column deletions in Hudi

Posted by GitBox <gi...@apache.org>.
easonwood commented on issue #5290:
URL: https://github.com/apache/hudi/issues/5290#issuecomment-1096191600

   It seems this error does not affect the result; the data is loaded into Hudi successfully.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@hudi.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org