Posted to dev@zeppelin.apache.org by "Meethu Mathew (JIRA)" <ji...@apache.org> on 2017/02/21 05:41:44 UTC

[jira] [Created] (ZEPPELIN-2141) sc.addPyFile("hdfs://path/to file) in zeppelin causing UnKnownHostException

Meethu Mathew created ZEPPELIN-2141:
---------------------------------------

             Summary: sc.addPyFile("hdfs://path/to file) in zeppelin causing UnKnownHostException
                 Key: ZEPPELIN-2141
                 URL: https://issues.apache.org/jira/browse/ZEPPELIN-2141
             Project: Zeppelin
          Issue Type: Bug
          Components: pySpark
    Affects Versions: 0.6.0
            Reporter: Meethu Mathew
            Priority: Minor


In the documentation of sc.addPyFile() it is mentioned that "Add a .py or .zip dependency for all tasks to be executed on this SparkContext in the future. The path passed can be either a local file, a file in HDFS (or other Hadoop-supported filesystems), or an HTTP, HTTPS or FTP URI."

But when I pass an HDFS path to this method in Zeppelin, it results in the following exception:
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, demo-node4.flytxt.com): java.lang.IllegalArgumentException: java.net.UnknownHostException: flycluster

The Spark version used is 1.6.2. The same command works fine in the pyspark shell, so I think something is wrong with Zeppelin.
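
For reference, a minimal reproduction sketch run from a %pyspark paragraph (the file path and module name are placeholders; "flycluster" is the HDFS HA nameservice configured in our hdfs-site.xml):

%pyspark
# hypothetical dependency path on HDFS; only the nameservice "flycluster" matters here
sc.addPyFile("hdfs://flycluster/user/zeppelin/deps/mymodule.py")
# any action that ships the dependency to the executors then fails with
# java.lang.IllegalArgumentException: java.net.UnknownHostException: flycluster
sc.parallelize(range(10)).count()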



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)