Posted to issues@spark.apache.org by "Tobias (JIRA)" <ji...@apache.org> on 2015/04/10 10:18:13 UTC

[jira] [Commented] (SPARK-6388) Spark 1.3 + Hadoop 2.6 Can't work on Java 8_40

    [ https://issues.apache.org/jira/browse/SPARK-6388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14489152#comment-14489152 ] 

Tobias commented on SPARK-6388:
-------------------------------

Hi everyone,
I have a similar setup to the original reporter and see the same problem.
I am using the Oracle JDKs, version 7 (1.7.0_76) and version 8 (1.8.0_40). With Java 8 I get the reported disassociation error every time I submit a job or even just bring up the spark-shell.

My setup uses YARN, and with Java 8 this error appears every time I start a job; if I run Spark without YARN, it works fine on Java 8.
All of the stock Hadoop examples also run fine with Java 8 on YARN, so this looks to me like a problem specific to the Spark + YARN + Java 8 combination (a rough reproduction sketch follows below).
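Roughly, this is the matrix I am testing; the JDK install paths below are illustrative, the spark-shell invocations are the standard Spark 1.3 ones:
---------------------------
# Java 8 + YARN: fails with the Disassociated warning (illustrative JDK path)
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_40
./bin/spark-shell --master yarn-client

# Java 8 without YARN: works
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_40
./bin/spark-shell --master local[*]

# Java 7 + YARN: works (illustrative JDK path)
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_76
./bin/spark-shell --master yarn-client
---------------------------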

I suggest reconsidering this and acknowledging it as a bug. If you need more information, just ask :)

> Spark 1.3 + Hadoop 2.6 Can't work on Java 8_40
> ----------------------------------------------
>
>                 Key: SPARK-6388
>                 URL: https://issues.apache.org/jira/browse/SPARK-6388
>             Project: Spark
>          Issue Type: Bug
>          Components: Block Manager, Spark Submit, YARN
>    Affects Versions: 1.3.0
>         Environment: 1. Linux version 3.16.0-30-generic (buildd@komainu) (gcc version 4.9.1 (Ubuntu 4.9.1-16ubuntu6) ) #40-Ubuntu SMP Mon Jan 12 22:06:37 UTC 2015
> 2. Oracle Java 8 update 40  for Linux X64
> 3. Scala 2.10.5
> 4. Hadoop 2.6 (pre-built version)
>            Reporter: John
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> I built Apache Spark 1.3 manually.
> ---------------------------
> JAVA_HOME=PATH_TO_JAVA8
> mvn clean package -Pyarn -Phadoop-2.4 -Dhadoop.version=2.6.0 -DskipTests
> ---------------------------
> Something goes wrong; Akka always tells me:
> ---------------------------
> 15/03/17 21:28:10 WARN remote.ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkYarnAM@Server2:42161] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
> ---------------------------
> I built another version of Spark 1.3 + Hadoop 2.6 under Java 7 (same command, different JAVA_HOME; see below), and everything works fine.
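> For comparison, the Java 7 build only changes the JAVA_HOME placeholder:
> ---------------------------
> JAVA_HOME=PATH_TO_JAVA7
> mvn clean package -Pyarn -Phadoop-2.4 -Dhadoop.version=2.6.0 -DskipTests
> ---------------------------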
> Logs from the failing Java 8 run:
> ---------------------------
> 15/03/17 21:27:06 INFO spark.SparkContext: Running Spark version 1.3.0
> 15/03/17 21:27:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 15/03/17 21:27:08 INFO spark.SecurityManager: Changing view Servers to: hduser
> 15/03/17 21:27:08 INFO spark.SecurityManager: Changing modify Servers to: hduser
> 15/03/17 21:27:08 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui Servers disabled; users with view permissions: Set(hduser); users with modify permissions: Set(hduser)
> 15/03/17 21:27:08 INFO slf4j.Slf4jLogger: Slf4jLogger started
> 15/03/17 21:27:08 INFO Remoting: Starting remoting
> 15/03/17 21:27:09 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@Server3:37951]
> 15/03/17 21:27:09 INFO util.Utils: Successfully started service 'sparkDriver' on port 37951.
> 15/03/17 21:27:09 INFO spark.SparkEnv: Registering MapOutputTracker
> 15/03/17 21:27:09 INFO spark.SparkEnv: Registering BlockManagerMaster
> 15/03/17 21:27:09 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-0db692bb-cd02-40c8-a8f0-3813c6da18e2/blockmgr-a1d0ad23-ab76-4177-80a0-a6f982a64d80
> 15/03/17 21:27:09 INFO storage.MemoryStore: MemoryStore started with capacity 265.1 MB
> 15/03/17 21:27:09 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-502ef3f8-b8cd-45cf-b1df-97df297cdb35/httpd-6303e24d-4b2b-4614-bb1d-74e8d331189b
> 15/03/17 21:27:09 INFO spark.HttpServer: Starting HTTP Server
> 15/03/17 21:27:09 INFO server.Server: jetty-8.y.z-SNAPSHOT
> 15/03/17 21:27:10 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:48000
> 15/03/17 21:27:10 INFO util.Utils: Successfully started service 'HTTP file server' on port 48000.
> 15/03/17 21:27:10 INFO spark.SparkEnv: Registering OutputCommitCoordinator
> 15/03/17 21:27:10 INFO server.Server: jetty-8.y.z-SNAPSHOT
> 15/03/17 21:27:10 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
> 15/03/17 21:27:10 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
> 15/03/17 21:27:10 INFO ui.SparkUI: Started SparkUI at http://Server3:4040
> 15/03/17 21:27:10 INFO spark.SparkContext: Added JAR file:/home/hduser/spark-java2.jar at http://192.168.11.42:48000/jars/spark-java2.jar with timestamp 1426598830307
> 15/03/17 21:27:10 INFO client.RMProxy: Connecting to ResourceManager at Server3/192.168.11.42:8050
> 15/03/17 21:27:11 INFO yarn.Client: Requesting a new application from cluster with 3 NodeManagers
> 15/03/17 21:27:11 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
> 15/03/17 21:27:11 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
> 15/03/17 21:27:11 INFO yarn.Client: Setting up container launch context for our AM
> 15/03/17 21:27:11 INFO yarn.Client: Preparing resources for our AM container
> 15/03/17 21:27:12 INFO yarn.Client: Uploading resource file:/home/hduser/spark-1.3.0/assembly/target/scala-2.10/spark-assembly-1.3.0-hadoop2.6.0.jar -> hdfs://Server3:9000/user/hduser/.sparkStaging/application_1426595477608_0002/spark-assembly-1.3.0-hadoop2.6.0.jar
> 15/03/17 21:27:21 INFO yarn.Client: Setting up the launch environment for our AM container
> 15/03/17 21:27:21 INFO spark.SecurityManager: Changing view Servers to: hduser
> 15/03/17 21:27:21 INFO spark.SecurityManager: Changing modify Servers to: hduser
> 15/03/17 21:27:21 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui Servers disabled; users with view permissions: Set(hduser); users with modify permissions: Set(hduser)
> 15/03/17 21:27:21 INFO yarn.Client: Submitting application 2 to ResourceManager
> 15/03/17 21:27:22 INFO impl.YarnClientImpl: Submitted application application_1426595477608_0002
> 15/03/17 21:27:23 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:23 INFO yarn.Client:
>          client token: N/A
>          diagnostics: N/A
>          ApplicationMaster host: N/A
>          ApplicationMaster RPC port: -1
>          queue: default
>          start time: 1426598841696
>          final status: UNDEFINED
>          tracking URL: http://Server3:8088/proxy/application_1426595477608_0002/
>          user: hduser
> 15/03/17 21:27:24 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:25 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:26 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:27 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:28 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:29 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:30 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:31 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:32 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:33 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:34 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:35 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:36 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:37 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:38 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:39 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:40 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:41 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:42 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:43 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:44 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:45 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:46 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:47 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:48 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:49 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:50 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:51 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:52 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:53 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:54 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:55 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:56 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:57 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:58 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:27:59 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:28:00 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:28:01 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:28:02 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:28:03 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:28:04 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:28:05 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:28:06 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:28:07 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:28:08 INFO yarn.Client: Application report for application_1426595477608_0002 (state: ACCEPTED)
> 15/03/17 21:28:08 INFO cluster.YarnClientSchedulerBackend: ApplicationMaster registered as Actor[akka.tcp://sparkYarnAM@Server2:42161/user/YarnAM#1971185024]
> 15/03/17 21:28:08 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> Server2, PROXY_URI_BASES -> http://Server2:8088/proxy/application_1426595477608_0002), /proxy/application_1426595477608_0002
> 15/03/17 21:28:08 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
> 15/03/17 21:28:09 INFO yarn.Client: Application report for application_1426595477608_0002 (state: RUNNING)
> 15/03/17 21:28:09 INFO yarn.Client:
>          client token: N/A
>          diagnostics: N/A
>          ApplicationMaster host: Server2
>          ApplicationMaster RPC port: 0
>          queue: default
>          start time: 1426598841696
>          final status: UNDEFINED
>          tracking URL: http://Server3:8088/proxy/application_1426595477608_0002/
>          user: hduser
> 15/03/17 21:28:09 INFO cluster.YarnClientSchedulerBackend: Application application_1426595477608_0002 has started running.
> 15/03/17 21:28:09 INFO netty.NettyBlockTransferService: Server created on 41323
> 15/03/17 21:28:09 INFO storage.BlockManagerMaster: Trying to register BlockManager
> 15/03/17 21:28:09 INFO storage.BlockManagerMasterActor: Registering block manager Server3:41323 with 265.1 MB RAM, BlockManagerId(<driver>, Server3, 41323)
> 15/03/17 21:28:09 INFO storage.BlockManagerMaster: Registered BlockManager
> 15/03/17 21:28:09 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
> 15/03/17 21:28:09 INFO storage.MemoryStore: ensureFreeSpace(85091) called with curMem=0, maxMem=278019440
> 15/03/17 21:28:09 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 83.1 KB, free 265.1 MB)
> 15/03/17 21:28:10 INFO storage.MemoryStore: ensureFreeSpace(36387) called with curMem=85091, maxMem=278019440
> 15/03/17 21:28:10 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 35.5 KB, free 265.0 MB)
> 15/03/17 21:28:10 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on Server3:41323 (size: 35.5 KB, free: 265.1 MB)
> 15/03/17 21:28:10 INFO storage.BlockManagerMaster: Updated info of block broadcast_0_piece0
> 15/03/17 21:28:10 INFO spark.SparkContext: Created broadcast 0 from textFile at Test1.java:30
> 15/03/17 21:28:10 INFO mapred.FileInputFormat: Total input paths to process : 1
> 15/03/17 21:28:10 INFO spark.SparkContext: Starting job: first at RowMatrix.scala:62
> 15/03/17 21:28:10 INFO scheduler.DAGScheduler: Got job 0 (first at RowMatrix.scala:62) with 1 output partitions (allowLocal=true)
> 15/03/17 21:28:10 INFO scheduler.DAGScheduler: Final stage: Stage 0(first at RowMatrix.scala:62)
> 15/03/17 21:28:10 INFO scheduler.DAGScheduler: Parents of final stage: List()
> 15/03/17 21:28:10 INFO scheduler.DAGScheduler: Missing parents: List()
> 15/03/17 21:28:10 INFO scheduler.DAGScheduler: Submitting Stage 0 (MapPartitionsRDD[2] at map at Test1.java:33), which has no missing parents
> 15/03/17 21:28:10 INFO storage.MemoryStore: ensureFreeSpace(3784) called with curMem=121478, maxMem=278019440
> 15/03/17 21:28:10 INFO storage.MemoryStore: Block broadcast_1 stored as values in memory (estimated size 3.7 KB, free 265.0 MB)
> 15/03/17 21:28:10 INFO storage.MemoryStore: ensureFreeSpace(2755) called with curMem=125262, maxMem=278019440
> 15/03/17 21:28:10 INFO storage.MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.7 KB, free 265.0 MB)
> 15/03/17 21:28:10 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on Server3:41323 (size: 2.7 KB, free: 265.1 MB)
> 15/03/17 21:28:10 INFO storage.BlockManagerMaster: Updated info of block broadcast_1_piece0
> 15/03/17 21:28:10 INFO spark.SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:839
> 15/03/17 21:28:10 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from Stage 0 (MapPartitionsRDD[2] at map at Test1.java:33)
> 15/03/17 21:28:10 INFO cluster.YarnScheduler: Adding task set 0.0 with 1 tasks
> 15/03/17 21:28:10 WARN remote.ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkYarnAM@Server2:42161] has failed, address is now gated for [5000] ms. Reason is: [Disassociated].
> 15/03/17 21:28:14 INFO cluster.YarnClientSchedulerBackend: ApplicationMaster registered as Actor[akka.tcp://sparkYarnAM@Server2:34207/user/YarnAM#1946221926]
> 15/03/17 21:28:14 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> Server2, PROXY_URI_BASES -> http://Server2:8088/proxy/application_1426595477608_0002), /proxy/application_1426595477608_0002
> 15/03/17 21:28:14 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter


