Posted to commits@hudi.apache.org by "Jonathan Vexler (Jira)" <ji...@apache.org> on 2022/09/20 18:36:00 UTC

[jira] [Comment Edited] (HUDI-2786) Failed to connect to namenode in Docker Demo on Apple M1 chip

    [ https://issues.apache.org/jira/browse/HUDI-2786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17607356#comment-17607356 ] 

Jonathan Vexler edited comment on HUDI-2786 at 9/20/22 6:35 PM:
----------------------------------------------------------------

Tried to get the demo working on M1 by using the images from issue 4985. We were able to reproduce the error mentioned there and got around it by using the x86 historyserver image while still using the ARM Docker images for everything else. This resulted in an error where org/apache/avro/LogicalType could not be found. Stack trace below:
{code:java}
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
2022-09-19 17:29:00,808 INFO [main] hive.metastore (HiveMetaStoreClient.java:close(564)) - Closed a connection to metastore, current connections: 0
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/avro/LogicalType
at org.apache.hudi.common.table.TableSchemaResolver.convertParquetSchemaToAvro(TableSchemaResolver.java:288)
at org.apache.hudi.common.table.TableSchemaResolver.getTableAvroSchemaFromDataFile(TableSchemaResolver.java:121)
at org.apache.hudi.common.table.TableSchemaResolver.hasOperationField(TableSchemaResolver.java:566)
at org.apache.hudi.util.Lazy.get(Lazy.java:53)
at org.apache.hudi.common.table.TableSchemaResolver.getTableSchemaFromLatestCommitMetadata(TableSchemaResolver.java:225)
at org.apache.hudi.common.table.TableSchemaResolver.getTableAvroSchemaInternal(TableSchemaResolver.java:193)
at org.apache.hudi.common.table.TableSchemaResolver.getTableAvroSchema(TableSchemaResolver.java:142)
at org.apache.hudi.common.table.TableSchemaResolver.getTableParquetSchema(TableSchemaResolver.java:173)
at org.apache.hudi.sync.common.HoodieSyncClient.getStorageSchema(HoodieSyncClient.java:103)
at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:206)
at org.apache.hudi.hive.HiveSyncTool.doSync(HiveSyncTool.java:153)
at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:141)
at org.apache.hudi.hive.HiveSyncTool.main(HiveSyncTool.java:358)
Caused by: java.lang.ClassNotFoundException: org.apache.avro.LogicalType
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 13 more
  {code}
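For anyone reproducing this mixed setup, a minimal way to confirm which of the demo images are ARM builds and which are x86 is to inspect the platform of each pulled image. This is just a diagnostic sketch, assuming Docker Compose v2 ("docker compose") and the demo's compose file path:
{code:bash}
# Print the platform each demo image was built for, to confirm that the
# historyserver is the lone x86 (amd64) image among otherwise ARM images.
for img in $(docker compose -f docker/compose/docker-compose_hadoop284_hive233_spark244.yml config --images); do
  printf '%s -> %s\n' "$img" "$(docker image inspect --format '{{.Os}}/{{.Architecture}}' "$img")"
done
{code}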
To see if I could continue working on this M1 issue, I created a branch from Hudi commit b49417ff01d3c30e5e60ed3c4449e30b5ddc070e (August 20th), made the image changes in docker/compose/docker-compose_hadoop284_hive233_spark244.yml, and made some small changes to the base pom.xml to update the proto and protoc versions, which is necessary to build Hudi on M1.
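The pom change amounts to bumping the protobuf tooling to a release that ships an Apple Silicon protoc binary. As a minimal sketch, the same effect can be had by overriding the version properties on the command line; the property names here are assumptions, not necessarily the exact ones in Hudi's base pom:
{code:bash}
# Hypothetical override (property names are assumptions; check the base pom).
# protobuf 3.21.x publishes an osx-aarch_64 protoc binary, so Maven can fetch
# a native protoc on M1 instead of an x86-only one.
mvn clean package -DskipTests \
  -Dproto.version=3.21.5 \
  -Dprotoc.version=3.21.5
{code}
Running the demo's first DeltaStreamer ingest against this build then failed with the error below: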
{code:java}
root@adhoc-2:/opt# spark-submit \
>   --class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer $HUDI_UTILITIES_BUNDLE \
>   --table-type COPY_ON_WRITE \
>   --source-class org.apache.hudi.utilities.sources.JsonKafkaSource \
>   --source-ordering-field ts  \
>   --target-base-path /user/hive/warehouse/stock_ticks_cow \
>   --target-table stock_ticks_cow --props /var/demo/config/kafka-source.properties \
>   --schemaprovider-class org.apache.hudi.utilities.schema.FilebasedSchemaProvider
22/09/20 18:24:07 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
22/09/20 18:24:07 WARN SchedulerConfGenerator: Job Scheduling Configs will not be in effect as spark.scheduler.mode is not set to FAIR at instantiation time. Continuing without scheduling configs
22/09/20 18:24:09 WARN DFSPropertiesConfiguration: Cannot find HUDI_CONF_DIR, please set it as the dir of hudi-defaults.conf
22/09/20 18:24:09 WARN DFSPropertiesConfiguration: Properties file file:/etc/hudi/conf/hudi-defaults.conf not found. Ignoring to load props file
22/09/20 18:24:09 WARN SparkContext: Using an existing SparkContext; some configuration may not take effect.
22/09/20 18:24:20 WARN KafkaUtils: overriding enable.auto.commit to false for executor
22/09/20 18:24:20 WARN KafkaUtils: overriding auto.offset.reset to none for executor
22/09/20 18:24:20 ERROR KafkaUtils: group.id is null, you should probably set it
22/09/20 18:24:20 WARN KafkaUtils: overriding executor group.id to spark-executor-null
22/09/20 18:24:20 WARN KafkaUtils: overriding receive.buffer.bytes to 65536 see KAFKA-3135
22/09/20 18:24:24 WARN HoodieBackedTableMetadata: Metadata table was not found at path /user/hive/warehouse/stock_ticks_cow/.hoodie/metadata
00:07  WARN: Timeline-server-based markers are not supported for HDFS: base path /user/hive/warehouse/stock_ticks_cow.  Falling back to direct markers.
00:08  WARN: Timeline-server-based markers are not supported for HDFS: base path /user/hive/warehouse/stock_ticks_cow.  Falling back to direct markers.
00:09  WARN: Timeline-server-based markers are not supported for HDFS: base path /user/hive/warehouse/stock_ticks_cow.  Falling back to direct markers.
22/09/20 18:24:30 ERROR Javalin: Exception occurred while servicing http-request
java.lang.UnsatisfiedLinkError: /tmp/librocksdbjni1786657336445888001.so: /tmp/librocksdbjni1786657336445888001.so: cannot open shared object file: No such file or directory (Possible cause: can't load AMD 64-bit .so on a AARCH64-bit platform)
 at java.lang.ClassLoader$NativeLibrary.load(Native Method)
 at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
 at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
 at java.lang.Runtime.load0(Runtime.java:809)
 at java.lang.System.load(System.java:1086)
 at org.rocksdb.NativeLibraryLoader.loadLibraryFromJar(NativeLibraryLoader.java:78)
 at org.rocksdb.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:56)
 at org.rocksdb.RocksDB.loadLibrary(RocksDB.java:64)
 at org.rocksdb.RocksDB.<clinit>(RocksDB.java:35)
 at org.rocksdb.DBOptions.<clinit>(DBOptions.java:21)
 at org.apache.hudi.common.util.collection.RocksDBDAO.init(RocksDBDAO.java:97)
 at org.apache.hudi.common.util.collection.RocksDBDAO.<init>(RocksDBDAO.java:73)
 at org.apache.hudi.common.table.view.RocksDbBasedFileSystemView.<init>(RocksDbBasedFileSystemView.java:78)
 at org.apache.hudi.common.table.view.FileSystemViewManager.createRocksDBBasedFileSystemView(FileSystemViewManager.java:141)
 at org.apache.hudi.common.table.view.FileSystemViewManager.lambda$createViewManager$367915d8$1(FileSystemViewManager.java:251)
 at org.apache.hudi.common.table.view.FileSystemViewManager.lambda$getFileSystemView$0(FileSystemViewManager.java:103)
 at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660)
 at org.apache.hudi.common.table.view.FileSystemViewManager.getFileSystemView(FileSystemViewManager.java:101)
 at org.apache.hudi.timeline.service.RequestHandler.isLocalViewBehind(RequestHandler.java:125)
 at org.apache.hudi.timeline.service.RequestHandler.syncIfLocalViewBehind(RequestHandler.java:150)
 at org.apache.hudi.timeline.service.RequestHandler.access$100(RequestHandler.java:66)
 at org.apache.hudi.timeline.service.RequestHandler$ViewHandler.handle(RequestHandler.java:492)
 at io.javalin.security.SecurityUtil.noopAccessManager(SecurityUtil.kt:22)
 at io.javalin.Javalin.lambda$addHandler$0(Javalin.java:606)
 at io.javalin.core.JavalinServlet$service$2$1.invoke(JavalinServlet.kt:46)
 at io.javalin.core.JavalinServlet$service$2$1.invoke(JavalinServlet.kt:17)
 at io.javalin.core.JavalinServlet$service$1.invoke(JavalinServlet.kt:143)
 at io.javalin.core.JavalinServlet$service$2.invoke(JavalinServlet.kt:41)
 at io.javalin.core.JavalinServlet.service(JavalinServlet.kt:107)
 at io.javalin.core.util.JettyServerUtil$initialize$httpHandler$1.doHandle(JettyServerUtil.kt:72)
 at org.apache.hudi.org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
 at org.apache.hudi.org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
 at org.apache.hudi.org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1668)
 at org.apache.hudi.org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
 at org.apache.hudi.org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
 at org.apache.hudi.org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
 at org.apache.hudi.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:61)
 at org.apache.hudi.org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:174)
 at org.apache.hudi.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
 at org.apache.hudi.org.eclipse.jetty.server.Server.handle(Server.java:502)
 at org.apache.hudi.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:370)
 at org.apache.hudi.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:267)
 at org.apache.hudi.org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
 at org.apache.hudi.org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
 at org.apache.hudi.org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
 at org.apache.hudi.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
 at org.apache.hudi.org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)
 at java.lang.Thread.run(Thread.java:748)
22/09/20 18:24:30 ERROR PriorityBasedFileSystemView: Got error running preferred function. Trying secondary
org.apache.hudi.exception.HoodieRemoteException: Server Error
 at org.apache.hudi.common.table.view.RemoteHoodieTableFileSystemView.getPendingCompactionOperations(RemoteHoodieTableFileSystemView.java:438)
 at org.apache.hudi.common.table.view.PriorityBasedFileSystemView.execute(PriorityBasedFileSystemView.java:68)
 at org.apache.hudi.common.table.view.PriorityBasedFileSystemView.getPendingCompactionOperations(PriorityBasedFileSystemView.java:224)
 at org.apache.hudi.table.action.clean.CleanPlanner.<init>(CleanPlanner.java:96)
 at org.apache.hudi.table.action.clean.CleanPlanActionExecutor.requestClean(CleanPlanActionExecutor.java:97)
 at org.apache.hudi.table.action.clean.CleanPlanActionExecutor.requestClean(CleanPlanActionExecutor.java:141)
 at org.apache.hudi.table.action.clean.CleanPlanActionExecutor.execute(CleanPlanActionExecutor.java:166)
 at org.apache.hudi.table.HoodieSparkCopyOnWriteTable.scheduleCleaning(HoodieSparkCopyOnWriteTable.java:204)
 at org.apache.hudi.client.BaseHoodieWriteClient.scheduleTableServiceInternal(BaseHoodieWriteClient.java:1358)
 at org.apache.hudi.client.BaseHoodieWriteClient.clean(BaseHoodieWriteClient.java:870)
 at org.apache.hudi.client.BaseHoodieWriteClient.clean(BaseHoodieWriteClient.java:843)
 at org.apache.hudi.client.BaseHoodieWriteClient.clean(BaseHoodieWriteClient.java:897)
 at org.apache.hudi.client.BaseHoodieWriteClient.autoCleanOnCommit(BaseHoodieWriteClient.java:618)
 at org.apache.hudi.client.BaseHoodieWriteClient.postCommit(BaseHoodieWriteClient.java:537)
 at org.apache.hudi.client.BaseHoodieWriteClient.commitStats(BaseHoodieWriteClient.java:238)
 at org.apache.hudi.client.SparkRDDWriteClient.commit(SparkRDDWriteClient.java:122)
 at org.apache.hudi.utilities.deltastreamer.DeltaSync.writeToSink(DeltaSync.java:625)
 at org.apache.hudi.utilities.deltastreamer.DeltaSync.syncOnce(DeltaSync.java:336)
 at org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer.lambda$sync$2(HoodieDeltaStreamer.java:201)
 at org.apache.hudi.common.util.Option.ifPresent(Option.java:97)
 at org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer.sync(HoodieDeltaStreamer.java:199)
 at org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer.main(HoodieDeltaStreamer.java:562)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
 at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845)
 at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
 at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
 at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
 at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.http.client.HttpResponseException: Server Error
 at org.apache.http.impl.client.AbstractResponseHandler.handleResponse(AbstractResponseHandler.java:70)
 at org.apache.http.client.fluent.Response.handleResponse(Response.java:90)
 at org.apache.http.client.fluent.Response.returnContent(Response.java:97)
 at org.apache.hudi.common.table.view.RemoteHoodieTableFileSystemView.executeRequest(RemoteHoodieTableFileSystemView.java:185)
 at org.apache.hudi.common.table.view.RemoteHoodieTableFileSystemView.getPendingCompactionOperations(RemoteHoodieTableFileSystemView.java:434)
 ... 33 more {code}
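The root failure in this run is the timeline server's RocksDB-backed file-system view trying to load an amd64 native library inside an aarch64 container. A quick diagnostic (the jar path is a placeholder) is to list which native libraries the rocksdbjni jar on the classpath actually bundles:
{code:bash}
# List the natives packaged in the rocksdbjni jar (path is a placeholder).
# An ARM container needs a librocksdbjni-linux-aarch64.so entry; older
# rocksdbjni releases only ship x86 (linux64) natives, which matches the
# UnsatisfiedLinkError above.
unzip -l /path/to/rocksdbjni-*.jar | grep librocksdbjni
{code}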



> Failed to connect to namenode in Docker Demo on Apple M1 chip
> -------------------------------------------------------------
>
>                 Key: HUDI-2786
>                 URL: https://issues.apache.org/jira/browse/HUDI-2786
>             Project: Apache Hudi
>          Issue Type: Bug
>          Components: dependencies, dev-experience
>            Reporter: Ethan Guo
>            Assignee: Jonathan Vexler
>            Priority: Blocker
>             Fix For: 0.13.0
>
>
> {code:java}
> > ./setup_demo.sh 
> [+] Running 1/0
>  ⠿ compose  Warning: No resource found to remove                                                                                                                                                        0.0s
> [+] Running 15/15
>  ⠿ namenode Pulled                                                                                                                                                                                      1.4s
>  ⠿ kafka Pulled                                                                                                                                                                                         1.3s
>  ⠿ presto-worker-1 Pulled                                                                                                                                                                               1.3s
>  ⠿ historyserver Pulled                                                                                                                                                                                 1.4s
>  ⠿ adhoc-2 Pulled                                                                                                                                                                                       1.3s
>  ⠿ adhoc-1 Pulled                                                                                                                                                                                       1.4s
>  ⠿ graphite Pulled                                                                                                                                                                                      1.3s
>  ⠿ sparkmaster Pulled                                                                                                                                                                                   1.3s
>  ⠿ hive-metastore-postgresql Pulled                                                                                                                                                                     1.3s
>  ⠿ presto-coordinator-1 Pulled                                                                                                                                                                          1.3s
>  ⠿ spark-worker-1 Pulled                                                                                                                                                                                1.4s
>  ⠿ hiveserver Pulled                                                                                                                                                                                    1.3s
>  ⠿ hivemetastore Pulled                                                                                                                                                                                 1.4s
>  ⠿ zookeeper Pulled                                                                                                                                                                                     1.3s
>  ⠿ datanode1 Pulled                                                                                                                                                                                     1.3s
> [+] Running 16/16
>  ⠿ Network compose_default              Created                                                                                                                                                         0.0s
>  ⠿ Container hive-metastore-postgresql  Started                                                                                                                                                         1.1s
>  ⠿ Container kafkabroker                Started                                                                                                                                                         1.1s
>  ⠿ Container zookeeper                  Started                                                                                                                                                         1.1s
>  ⠿ Container namenode                   Started                                                                                                                                                         1.3s
>  ⠿ Container graphite                   Started                                                                                                                                                         1.2s
>  ⠿ Container historyserver              Started                                                                                                                                                         2.2s
>  ⠿ Container hivemetastore              Started                                                                                                                                                         2.2s
>  ⠿ Container datanode1                  Started                                                                                                                                                         3.3s
>  ⠿ Container presto-coordinator-1       Started                                                                                                                                                         2.7s
>  ⠿ Container hiveserver                 Started                                                                                                                                                         3.2s
>  ⠿ Container presto-worker-1            Started                                                                                                                                                         4.2s
>  ⠿ Container sparkmaster                Started                                                                                                                                                         3.5s
>  ⠿ Container adhoc-2                    Started                                                                                                                                                         4.7s
>  ⠿ Container adhoc-1                    Started                                                                                                                                                         4.8s
>  ⠿ Container spark-worker-1             Started                                                                                                                                                         4.8s
> Copying spark default config and setting up configs
> 21/11/18 01:16:19 WARN ipc.Client: Failed to connect to server: namenode/172.19.0.6:8020: try once and fail.
> java.net.ConnectException: Connection refused
> 	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> 	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
> 	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> 	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
> 	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:685)
> 	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:788)
> 	at org.apache.hadoop.ipc.Client$Connection.access$3500(Client.java:410)
> 	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1550)
> 	at org.apache.hadoop.ipc.Client.call(Client.java:1381)
> 	at org.apache.hadoop.ipc.Client.call(Client.java:1345)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> 	at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:796)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:498)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
> 	at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
> 	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1649)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1440)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1437)
> 	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1437)
> 	at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:64)
> 	at org.apache.hadoop.fs.Globber.doGlob(Globber.java:269)
> 	at org.apache.hadoop.fs.Globber.glob(Globber.java:148)
> 	at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1686)
> 	at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:326)
> 	at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
> 	at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
> 	at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
> 	at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
> 	at org.apache.hadoop.fs.FsShell.run(FsShell.java:317)
> 	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> 	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
> 	at org.apache.hadoop.fs.FsShell.main(FsShell.java:380)
> mkdir: Call From adhoc-1/172.19.0.13 to namenode:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
> copyFromLocal: `/var/demo/.': No such file or directory: `hdfs://namenode:8020/var/demo'
> Copying spark default config and setting up configs {code}
> Env: MacBook with M1 chip, macOS 12.0.1



--
This message was sent by Atlassian Jira
(v8.20.10#820010)