Posted to github@beam.apache.org by GitBox <gi...@apache.org> on 2022/09/30 08:14:39 UTC

[GitHub] [beam] mosche opened a new issue, #23440: [Bug]: PortableRunner fails with connectivity issues if using Spark + Docker on a Mac

mosche opened a new issue, #23440:
URL: https://github.com/apache/beam/issues/23440

   ### What happened?
   
   Running the PortableRunner against Spark in Docker on a Mac fails with connectivity issues. The reason is that the entire stack expects to connect to components on `localhost`.
   Docker on Mac, however, doesn't support `host` networking, which would be required to make this work.
   
   Consider this job:
   ```
   python /tmp/app/__main__.py \
         --runner=PortableRunner \
         --job_endpoint=beam-job-server:8099 \
         --artifact_endpoint=beam-job-server:8098 \
         --environment_type=EXTERNAL \
         --environment_config=beam-python-workers:50000
   ```
   
   The Spark job-server was started with `--spark-master-url=spark://spark:7077 --job-host=beam-job-server`.
   
   I was able to fix connectivity between the job-server and the Spark cluster by also setting `-Dspark.driver.host=beam-job-server` when starting the job-server.
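   
   For concreteness, a hedged sketch of such a job-server invocation, here running the shaded job-server JAR directly (the artifact name and version are assumptions for illustration; a dockerized job server would need an equivalent way to pass the JVM property):
   
   ```shell
   # Hedged sketch: Spark job server with the flags quoted above plus the
   # spark.driver.host fix. The jar name/version are assumptions.
   java -Dspark.driver.host=beam-job-server \
        -jar beam-runners-spark-3-job-server-2.41.0.jar \
        --spark-master-url=spark://spark:7077 \
        --job-host=beam-job-server \
        --job-port=8099 \
        --artifact-port=8098
   ```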
   
   Unfortunately, connectivity is still broken for the Python SDK worker pool, which was started using `--worker_pool --provision_endpoint=beam-job-server:8099 --artifact_endpoint=beam-job-server:8098`.
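   
   For reference, a hedged sketch of how that worker pool might be launched (the image tag, container name, and docker network are assumptions; the flags are the ones quoted above):
   
   ```shell
   # Hedged sketch: Python SDK worker pool container. Everything except the
   # quoted flags is an assumption for illustration.
   docker run --name beam-python-workers --network beam -p 50000:50000 \
       apache/beam_python3.9_sdk:latest \
       --worker_pool \
       --provision_endpoint=beam-job-server:8099 \
       --artifact_endpoint=beam-job-server:8098
   ```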
   
   When a work request finally arrives, a worker is started as follows:
   ```
   2022-09-29T10:20:56.811855690Z Starting worker with command ['/opt/apache/beam/boot', '--id=1-1', '--logging_endpoint=localhost:42343', '--artifact_endpoint=localhost:34219', '--provision_endpoint=localhost:40675', '--control_endpoint=localhost:46485']
   ```
   
   With all endpoints pointing at `localhost`, the worker obviously isn't able to communicate results back.
   
   
   ### Issue Priority
   
   Priority: 2
   
   ### Issue Component
   
   Component: runner-spark




[GitHub] [beam] mosche closed issue #23440: [Bug]: PortableRunner fails with connectivity issues if using Spark + Docker on a Mac

Posted by GitBox <gi...@apache.org>.
mosche closed issue #23440: [Bug]: PortableRunner fails with connectivity issues if using Spark + Docker on a Mac
URL: https://github.com/apache/beam/issues/23440




[GitHub] [beam] mosche commented on issue #23440: [Bug]: PortableRunner fails with connectivity issues if using Spark + Docker on a Mac

Posted by GitBox <gi...@apache.org>.
mosche commented on issue #23440:
URL: https://github.com/apache/beam/issues/23440#issuecomment-1271567347

   The solution is to set these two environment variables for the Spark workers:
   
   ```shell
   # By default Beam expects workers (of the SDK worker pool) to connect to a Spark worker on `localhost`. When running
   # the worker pool in docker on a Mac this isn't possible due to the lack of `host` networking. Using
   # BEAM_WORKER_POOL_IN_DOCKER_VM=1, Beam will use `host.docker.internal` to communicate via the docker host instead.
   export BEAM_WORKER_POOL_IN_DOCKER_VM=1
   
   # DOCKER_MAC_CONTAINER=1 limits the ports on a Spark worker for communication with SDK workers to the range 8100 - 8200
   # instead of using random ports. Ports in the range are used in a round-robin fashion and have to be published.
   export DOCKER_MAC_CONTAINER=1
   ```
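   
   Applied to a dockerized Spark worker, that might look like the following hedged sketch (the image and container names are assumptions; the point is setting both variables and publishing the 8100-8200 range):
   
   ```shell
   # Hedged sketch: Spark worker container with both variables set and the
   # fixed port range published. Image/names are placeholders.
   docker run --name spark-worker --network beam \
       -e BEAM_WORKER_POOL_IN_DOCKER_VM=1 \
       -e DOCKER_MAC_CONTAINER=1 \
       -p 8100-8200:8100-8200 \
       <your-spark-worker-image>
   ```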
   
   See https://github.com/mosche/beam-portable-spark for a working example.




[GitHub] [beam] adekunleoajayi commented on issue #23440: [Bug]: PortableRunner fails with connectivity issues if using Spark + Docker on a Mac

Posted by GitBox <gi...@apache.org>.
adekunleoajayi commented on issue #23440:
URL: https://github.com/apache/beam/issues/23440#issuecomment-1269603187

   I have a similar problem with the portable runner on a Mac. Take this example:
   
   ```
   python -m apache_beam.examples.wordcount --input ./input/kinglear.txt \
                                            --output ./output_spark_portable_runner/counts \
                                            --runner PortableRunner \
                                            --job_endpoint localhost:8099 \
                                            --environment_type LOOPBACK
   ```
   
   ### 1 - Using `--net=host`
   
   When I start the portable runner with 
   
   `docker run --platform linux/amd64 --net=host apache/beam_spark_job_server:latest`
   
   The server starts successfully:
   
   ```
   22/10/06 07:48:44 INFO org.apache.beam.runners.jobsubmission.JobServerDriver: ArtifactStagingService started on localhost:8098
   22/10/06 07:48:45 INFO org.apache.beam.runners.jobsubmission.JobServerDriver: Java ExpansionService started on localhost:8097
   22/10/06 07:48:45 INFO org.apache.beam.runners.jobsubmission.JobServerDriver: JobService started on localhost:8099
   22/10/06 07:48:45 INFO org.apache.beam.runners.jobsubmission.JobServerDriver: Job server now running, terminate with Ctrl+C
   ```
   
   but the ports are not accessible because `--net=host` is not supported on Mac, so the pipeline fails:
   
   ```
   INFO:apache_beam.runners.worker.worker_pool_main:Listening for workers at localhost:59724
   WARNING:root:Make sure that locally built Python SDK docker image has Python 3.9 interpreter.
   INFO:root:Default Python SDK image for environment is apache/beam_python3.9_sdk:2.30.0
   INFO:apache_beam.runners.portability.fn_api_runner.translations:==================== <function lift_combiners at 0x7fea81d46040> ====================
   INFO:apache_beam.runners.portability.fn_api_runner.translations:==================== <function sort_stages at 0x7fea81d46790> ====================
   Traceback (most recent call last):
     File "/Users/aajayi/opt/anaconda3/lib/python3.9/runpy.py", line 197, in _run_module_as_main
       return _run_code(code, main_globals, None,
     File "/Users/aajayi/opt/anaconda3/lib/python3.9/runpy.py", line 87, in _run_code
       exec(code, run_globals)
     File "/Users/aajayi/Documents/Projects/BeamPython/venv/lib/python3.9/site-packages/apache_beam/examples/wordcount.py", line 94, in <module>
       run()
     File "/Users/aajayi/Documents/Projects/BeamPython/venv/lib/python3.9/site-packages/apache_beam/examples/wordcount.py", line 89, in run
       output | 'Write' >> WriteToText(known_args.output)
     File "/Users/aajayi/Documents/Projects/BeamPython/venv/lib/python3.9/site-packages/apache_beam/pipeline.py", line 585, in __exit__
       self.result = self.run()
     File "/Users/aajayi/Documents/Projects/BeamPython/venv/lib/python3.9/site-packages/apache_beam/pipeline.py", line 564, in run
       return self.runner.run_pipeline(self, self._options)
     File "/Users/aajayi/Documents/Projects/BeamPython/venv/lib/python3.9/site-packages/apache_beam/runners/portability/portable_runner.py", line 437, in run_pipeline
       job_service_handle = self.create_job_service(options)
     File "/Users/aajayi/Documents/Projects/BeamPython/venv/lib/python3.9/site-packages/apache_beam/runners/portability/portable_runner.py", line 317, in create_job_service
       return self.create_job_service_handle(server.start(), options)
     File "/Users/aajayi/Documents/Projects/BeamPython/venv/lib/python3.9/site-packages/apache_beam/runners/portability/job_server.py", line 54, in start
       grpc.channel_ready_future(channel).result(timeout=self._timeout)
     File "/Users/aajayi/Documents/Projects/BeamPython/venv/lib/python3.9/site-packages/grpc/_utilities.py", line 139, in result
       self._block(timeout)
     File "/Users/aajayi/Documents/Projects/BeamPython/venv/lib/python3.9/site-packages/grpc/_utilities.py", line 85, in _block
       raise grpc.FutureTimeoutError()
   grpc.FutureTimeoutError
   
   ```
   
   ### 2 - Published ports
   
   Instead, I tried to explicitly publish the ports when starting the portable runner
   
   `docker run --platform linux/amd64 -p 8099:8099 -p 8098:8098 -p 8097:8097 apache/beam_spark_job_server:latest`
   
   This way, the ports are accessible. This time it starts, then fails with the following error:
   
   ```
   INFO:apache_beam.runners.worker.worker_pool_main:Listening for workers at localhost:56571
   WARNING:root:Make sure that locally built Python SDK docker image has Python 3.9 interpreter.
   INFO:root:Default Python SDK image for environment is apache/beam_python3.9_sdk:2.30.0
   INFO:apache_beam.runners.portability.fn_api_runner.translations:==================== <function lift_combiners at 0x7fa9b841e040> ====================
   INFO:apache_beam.runners.portability.fn_api_runner.translations:==================== <function sort_stages at 0x7fa9b841e790> ====================
   INFO:apache_beam.runners.portability.portable_runner:Environment "LOOPBACK" has started a component necessary for the execution. Be sure to run the pipeline using
     with Pipeline() as p:
       p.apply(..)
   This ensures that the pipeline finishes before this program exits.
   INFO:apache_beam.runners.portability.portable_runner:Job state changed to STOPPED
   INFO:apache_beam.runners.portability.portable_runner:Job state changed to STARTING
   INFO:apache_beam.runners.portability.portable_runner:Job state changed to RUNNING
   ERROR:root:org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 0.0 failed 1 times, most recent failure: Lost task 3.0 in stage 0.0 (TID 3, localhost, executor driver): org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.UncheckedExecutionException: org.apache.beam.vendor.grpc.v1p43p2.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
           at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2050)
           at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.get(LocalCache.java:3952)
           at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3974)
           at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4958)
           at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4964)
           at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.<init>(DefaultJobBundleFactory.java:451)
           at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.<init>(DefaultJobBundleFactory.java:436)
           at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory.forStage(DefaultJobBundleFactory.java:303)
           at org.apache.beam.runners.fnexecution.control.DefaultExecutableStageContext.getStageBundleFactory(DefaultExecutableStageContext.java:38)
           at org.apache.beam.runners.fnexecution.control.ReferenceCountingExecutableStageContextFactory$WrappedContext.getStageBundleFactory(ReferenceCountingExecutableStageContextFactory.java:202)
           at org.apache.beam.runners.spark.translation.SparkExecutableStageFunction.call(SparkExecutableStageFunction.java:142)
           at org.apache.beam.runners.spark.translation.SparkExecutableStageFunction.call(SparkExecutableStageFunction.java:81)
           at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:153)
           at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:153)
           at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:823)
           at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:823)
           at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
           at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
           at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
           at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
           at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
           at org.apache.spark.rdd.RDD$$anonfun$7.apply(RDD.scala:359)
           at org.apache.spark.rdd.RDD$$anonfun$7.apply(RDD.scala:357)
           at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1165)
           at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1156)
           at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1091)
           at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1156)
           at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:882)
           at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:357)
           at org.apache.spark.rdd.RDD.iterator(RDD.scala:308)
           at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
           at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
           at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
           at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
           at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
           at org.apache.spark.rdd.RDD$$anonfun$7.apply(RDD.scala:359)
           at org.apache.spark.rdd.RDD$$anonfun$7.apply(RDD.scala:357)
           at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1165)
           at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1156)
           at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1091)
           at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1156)
           at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:882)
           at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:357)
           at org.apache.spark.rdd.RDD.iterator(RDD.scala:308)
           at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
           at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
           at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
           at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
           at org.apache.spark.scheduler.Task.run(Task.scala:123)
           at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:411)
           at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
           at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:417)
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:750)
   Caused by: org.apache.beam.vendor.grpc.v1p43p2.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
           at org.apache.beam.vendor.grpc.v1p43p2.io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:262)
           at org.apache.beam.vendor.grpc.v1p43p2.io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:243)
           at org.apache.beam.vendor.grpc.v1p43p2.io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:156)
           at org.apache.beam.model.fnexecution.v1.BeamFnExternalWorkerPoolGrpc$BeamFnExternalWorkerPoolBlockingStub.startWorker(BeamFnExternalWorkerPoolGrpc.java:225)
           at org.apache.beam.runners.fnexecution.environment.ExternalEnvironmentFactory.createEnvironment(ExternalEnvironmentFactory.java:113)
           at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:252)
           at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:231)
           at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3528)
           at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2277)
           at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2154)
           at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2044)
           ... 54 more
   Caused by: org.apache.beam.vendor.grpc.v1p43p2.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: localhost/127.0.0.1:56571
   Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused
           at org.apache.beam.vendor.grpc.v1p43p2.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155)
           at org.apache.beam.vendor.grpc.v1p43p2.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128)
           at org.apache.beam.vendor.grpc.v1p43p2.io.netty.channel.unix.Socket.finishConnect(Socket.java:278)
           at org.apache.beam.vendor.grpc.v1p43p2.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710)
           at org.apache.beam.vendor.grpc.v1p43p2.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687)
           at org.apache.beam.vendor.grpc.v1p43p2.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567)
           at org.apache.beam.vendor.grpc.v1p43p2.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:470)
           at org.apache.beam.vendor.grpc.v1p43p2.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
           at org.apache.beam.vendor.grpc.v1p43p2.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
           at org.apache.beam.vendor.grpc.v1p43p2.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
           at org.apache.beam.vendor.grpc.v1p43p2.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
           at java.lang.Thread.run(Thread.java:750)
   
   Driver stacktrace:
   INFO:apache_beam.runners.portability.portable_runner:Job state changed to FAILED
   Traceback (most recent call last):
     File "/Users/aajayi/opt/anaconda3/lib/python3.9/runpy.py", line 197, in _run_module_as_main
       return _run_code(code, main_globals, None,
     File "/Users/aajayi/opt/anaconda3/lib/python3.9/runpy.py", line 87, in _run_code
       exec(code, run_globals)
     File "/Users/aajayi/Documents/Projects/BeamPython/venv/lib/python3.9/site-packages/apache_beam/examples/wordcount.py", line 94, in <module>
       run()
     File "/Users/aajayi/Documents/Projects/BeamPython/venv/lib/python3.9/site-packages/apache_beam/examples/wordcount.py", line 89, in run
       output | 'Write' >> WriteToText(known_args.output)
     File "/Users/aajayi/Documents/Projects/BeamPython/venv/lib/python3.9/site-packages/apache_beam/pipeline.py", line 586, in __exit__
       self.result.wait_until_finish()
     File "/Users/aajayi/Documents/Projects/BeamPython/venv/lib/python3.9/site-packages/apache_beam/runners/portability/portable_runner.py", line 599, in wait_until_finish
       raise self._runtime_exception
   RuntimeError: Pipeline BeamApp-root-1006075554-73ce662f_3b12d9a3-1a00-42ad-be20-5796df9aeefd failed in state FAILED: org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 0.0 failed 1 times, most recent failure: Lost task 3.0 in stage 0.0 (TID 3, localhost, executor driver): org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.UncheckedExecutionException: org.apache.beam.vendor.grpc.v1p43p2.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
           at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2050)
           at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.get(LocalCache.java:3952)
           at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3974)
           at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4958)
           at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4964)
           at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.<init>(DefaultJobBundleFactory.java:451)
           at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory.<init>(DefaultJobBundleFactory.java:436)
           at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory.forStage(DefaultJobBundleFactory.java:303)
           at org.apache.beam.runners.fnexecution.control.DefaultExecutableStageContext.getStageBundleFactory(DefaultExecutableStageContext.java:38)
           at org.apache.beam.runners.fnexecution.control.ReferenceCountingExecutableStageContextFactory$WrappedContext.getStageBundleFactory(ReferenceCountingExecutableStageContextFactory.java:202)
           at org.apache.beam.runners.spark.translation.SparkExecutableStageFunction.call(SparkExecutableStageFunction.java:142)
           at org.apache.beam.runners.spark.translation.SparkExecutableStageFunction.call(SparkExecutableStageFunction.java:81)
           at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:153)
           at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$4$1.apply(JavaRDDLike.scala:153)
           at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:823)
           at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:823)
           at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
           at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
           at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
           at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
           at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
           at org.apache.spark.rdd.RDD$$anonfun$7.apply(RDD.scala:359)
           at org.apache.spark.rdd.RDD$$anonfun$7.apply(RDD.scala:357)
           at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1165)
           at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1156)
           at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1091)
           at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1156)
           at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:882)
           at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:357)
           at org.apache.spark.rdd.RDD.iterator(RDD.scala:308)
           at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
           at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
           at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
           at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
           at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
           at org.apache.spark.rdd.RDD$$anonfun$7.apply(RDD.scala:359)
           at org.apache.spark.rdd.RDD$$anonfun$7.apply(RDD.scala:357)
           at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1165)
           at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1156)
           at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1091)
           at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1156)
           at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:882)
           at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:357)
           at org.apache.spark.rdd.RDD.iterator(RDD.scala:308)
           at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
           at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
           at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
           at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
           at org.apache.spark.scheduler.Task.run(Task.scala:123)
           at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:411)
           at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
           at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:417)
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:750)
   Caused by: org.apache.beam.vendor.grpc.v1p43p2.io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
           at org.apache.beam.vendor.grpc.v1p43p2.io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:262)
           at org.apache.beam.vendor.grpc.v1p43p2.io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:243)
           at org.apache.beam.vendor.grpc.v1p43p2.io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:156)
           at org.apache.beam.model.fnexecution.v1.BeamFnExternalWorkerPoolGrpc$BeamFnExternalWorkerPoolBlockingStub.startWorker(BeamFnExternalWorkerPoolGrpc.java:225)
           at org.apache.beam.runners.fnexecution.environment.ExternalEnvironmentFactory.createEnvironment(ExternalEnvironmentFactory.java:113)
           at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:252)
           at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$1.load(DefaultJobBundleFactory.java:231)
           at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3528)
           at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2277)
           at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2154)
           at org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2044)
           ... 54 more
   Caused by: org.apache.beam.vendor.grpc.v1p43p2.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: localhost/127.0.0.1:56571
   Caused by: java.net.ConnectException: finishConnect(..) failed: Connection refused
           at org.apache.beam.vendor.grpc.v1p43p2.io.netty.channel.unix.Errors.newConnectException0(Errors.java:155)
           at org.apache.beam.vendor.grpc.v1p43p2.io.netty.channel.unix.Errors.handleConnectErrno(Errors.java:128)
           at org.apache.beam.vendor.grpc.v1p43p2.io.netty.channel.unix.Socket.finishConnect(Socket.java:278)
           at org.apache.beam.vendor.grpc.v1p43p2.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.doFinishConnect(AbstractEpollChannel.java:710)
           at org.apache.beam.vendor.grpc.v1p43p2.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:687)
           at org.apache.beam.vendor.grpc.v1p43p2.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:567)
           at org.apache.beam.vendor.grpc.v1p43p2.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:470)
           at org.apache.beam.vendor.grpc.v1p43p2.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378)
           at org.apache.beam.vendor.grpc.v1p43p2.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
           at org.apache.beam.vendor.grpc.v1p43p2.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
           at org.apache.beam.vendor.grpc.v1p43p2.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
           at java.lang.Thread.run(Thread.java:750)
   
   Driver stacktrace:
   ```
   
   Apparently, at startup the job server also spawns a worker listening on a dynamic port:
   
   `INFO:apache_beam.runners.worker.worker_pool_main:Listening for workers at localhost:56571`
   
   This worker can't be reached, and the pipeline fails:
   
   `Caused by: org.apache.beam.vendor.grpc.v1p43p2.io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Connection refused: localhost/127.0.0.1:56571`
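   
   A heavily hedged workaround sketch (not from this thread; names, tags, and ports are assumptions): replace `LOOPBACK` with an `EXTERNAL` worker pool whose port is fixed and published, so the dockerized runner no longer has to reach back to a random client-side port. The two environment variables from the fix above may still be needed for the reverse connections:
   
   ```shell
   # Hedged sketch: dockerized SDK worker pool on a fixed, published port.
   docker run -d -p 50000:50000 apache/beam_python3.9_sdk:2.30.0 --worker_pool
   
   # Point the pipeline at it via host.docker.internal (Docker Desktop only).
   python -m apache_beam.examples.wordcount --input ./input/kinglear.txt \
       --output ./output/counts \
       --runner PortableRunner \
       --job_endpoint localhost:8099 \
       --environment_type EXTERNAL \
       --environment_config host.docker.internal:50000
   ```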




[GitHub] [beam] mosche commented on issue #23440: [Bug]: PortableRunner fails with connectivity issues if using Spark + Docker on a Mac

Posted by GitBox <gi...@apache.org>.
mosche commented on issue #23440:
URL: https://github.com/apache/beam/issues/23440#issuecomment-1308567646

   This requires better documentation (#23438)!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@beam.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org