Posted to user@spark.apache.org by Jeff Zhang <zj...@gmail.com> on 2016/07/29 00:13:40 UTC

Re: spark run shell On yarn

One workaround is to disable the timeline service in yarn-site:

set yarn.timeline-service.enabled to false in yarn-site.xml
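
For reference, the property would look roughly like this in yarn-site.xml (a sketch; everything else in the file stays as it is):

```xml
<!-- yarn-site.xml: disable the YARN timeline service so the YARN client
     never enters the TimelineClient code path that needs Jersey 1.x -->
<property>
  <name>yarn.timeline-service.enabled</name>
  <value>false</value>
</property>
```

The ResourceManager and NodeManagers that read this file should be restarted for the change to take effect.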

On Thu, Jul 28, 2016 at 5:31 PM, censj <ce...@lotuseed.com> wrote:

> 16/07/28 17:07:34 WARN shortcircuit.DomainSocketFactory: The short-circuit
> local reads feature cannot be used because libhadoop cannot be loaded.
> java.lang.NoClassDefFoundError:
> com/sun/jersey/api/client/config/ClientConfig
>   at
> org.apache.hadoop.yarn.client.api.TimelineClient.createTimelineClient(TimelineClient.java:45)
>   at
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceInit(YarnClientImpl.java:163)
>   at
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
>   at
> org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:150)
>   at
> org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)
>   at
> org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:149)
>   at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
>   at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2256)
>   at
> org.apache.spark.sql.SparkSession$Builder$$anonfun$8.apply(SparkSession.scala:831)
>   at
> org.apache.spark.sql.SparkSession$Builder$$anonfun$8.apply(SparkSession.scala:823)
>   at scala.Option.getOrElse(Option.scala:121)
>   at
> org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:823)
>   at org.apache.spark.repl.Main$.createSparkSession(Main.scala:95)
>   ... 47 elided
> Caused by: java.lang.ClassNotFoundException:
> com.sun.jersey.api.client.config.ClientConfig
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   ... 60 more
> <console>:14: error: not found: value spark
>        import spark.implicits._
>               ^
> <console>:14: error: not found: value spark
>        import spark.sql
>               ^
> Welcome to
>
>
>
>
> hi:
> I use Spark 2.0, but when I run
> "/etc/spark-2.0.0-bin-hadoop2.6/bin/spark-shell --master yarn", this
> error appears.
>
> /etc/spark-2.0.0-bin-hadoop2.6/bin/spark-submit
> export YARN_CONF_DIR=/etc/hadoop/conf
> export HADOOP_CONF_DIR=/etc/hadoop/conf
> export SPARK_HOME=/etc/spark-2.0.0-bin-hadoop2.6
>
>
> How do I fix this?
>
>
>
>
>
> ===============================
> Name: cen sujun
> Mobile: 13067874572
> Mail: censj@lotuseed.com
>
>


-- 
Best Regards

Jeff Zhang

Re: spark run shell On yarn

Posted by Marcelo Vanzin <va...@cloudera.com>.
Well, it's more of an unfortunate incompatibility caused by dependency
hell. There's a YARN issue to make this better by avoiding that code
path when it's not needed, but I'm not sure what the status of that is.

On Thu, Jul 28, 2016 at 6:54 PM, censj <ce...@lotuseed.com> wrote:
> OK, solved!
> But is this a bug?
> ===============================
> Name: cen sujun
> Mobile: 13067874572
> Mail: censj@lotuseed.com
>
> On Jul 29, 2016, at 08:19, Marcelo Vanzin <va...@cloudera.com> wrote:
>
> spark.hadoop.yarn.timeline-service.enabled=false
>
>



-- 
Marcelo

---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscribe@spark.apache.org


Re: spark run shell On yarn

Posted by Marcelo Vanzin <va...@cloudera.com>.
You can probably do that in Spark's conf too:

spark.hadoop.yarn.timeline-service.enabled=false
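
Concretely, that could look like this (a sketch; the install path is taken from the original message, and note that any `spark.hadoop.*` property is forwarded into the Hadoop Configuration the YARN client sees):

```
# In /etc/spark-2.0.0-bin-hadoop2.6/conf/spark-defaults.conf
spark.hadoop.yarn.timeline-service.enabled  false
```

Or, for a single session, pass it on the command line instead:
/etc/spark-2.0.0-bin-hadoop2.6/bin/spark-shell --master yarn --conf spark.hadoop.yarn.timeline-service.enabled=false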

On Thu, Jul 28, 2016 at 5:13 PM, Jeff Zhang <zj...@gmail.com> wrote:
> One workaround is disable timeline in yarn-site,
>
> set yarn.timeline-service.enabled as false in yarn-site.xml
>
> On Thu, Jul 28, 2016 at 5:31 PM, censj <ce...@lotuseed.com> wrote:
>>
>> [stack trace and original question snipped; quoted in full earlier in
>> the thread]
>>
>
>
>
> --
> Best Regards
>
> Jeff Zhang



-- 
Marcelo
